Collaborate on integrating AI/ML solutions into existing Node.js applications.
Conduct data preprocessing, feature engineering, and model evaluation.
Optimize PostgreSQL database interactions to efficiently manage and query large datasets.
Leverage AWS services (e.g., EC2, S3, Lambda, Elastic Beanstalk) to deploy, scale, and maintain AI/ML models, applications, and data pipelines.
Manage workflows and data pipelines (e.g., Apache Airflow, Camunda, or similar).
Perform OCR tasks with Tesseract and advanced image and video analysis with Google Cloud Vision or Amazon Rekognition.
Use Docker for containerizing applications.
Requirements & Skills:
You are proficient in Python, JavaScript, and TypeScript.
You have experience with a major cloud provider such as AWS (we use AWS), GCP, or Azure.
You have a passion for building innovative solutions that drive impact and automate mundane tasks.
You have expertise in AI/ML model design and development, including frameworks and libraries such as TensorFlow, PyTorch, scikit-learn, Pandas, NumPy, NLTK, spaCy, OpenCV, or similar.
You have experience with workflow automation and RPA technologies such as Apache Airflow, Camunda, Tesseract, Google Cloud Vision, and Amazon Rekognition, as well as AWS services like EC2, S3, Lambda, and Elastic Beanstalk.
You have knowledge of CI/CD tools and Docker.
You work collaboratively in a fast-paced environment.
You are excited about leveraging OpenAI and Gemini technologies.