Data Engineer
Data Engineer with a Bachelor’s degree in Computer Science, Computer Information Systems, Information Technology, or a combination of education and experience equating to the U.S. equivalent of a Bachelor’s degree in one of the aforementioned subjects.
Job Duties and Responsibilities:
- Define the end-to-end solution architecture for large-scale technology projects, applying deep technical expertise in distributed processing and real-time, scalable systems.
- Architect, design, and develop Big Data streaming applications that use Redis, a high-performance, highly available NoSQL key-value store, for checkpointing.
- Design and develop Spark applications in Scala that use DOM/SAX parsers to parse incoming raw string/XML data.
- Design and develop AWS cloud deployment scripts using AWS CloudFormation templates, Terraform, and Ansible.
- Design, develop, and troubleshoot Hive, Pig, Flume, MongoDB, Sqoop, ZooKeeper, Spark, MapReduce2, YARN, HBase, Kafka, and Storm.
- Fine-tune applications and systems for high performance and higher-volume throughput, and pre-process data using Hive and Pig.
- Translate, load, and present disparate data sets in various formats and from various sources, such as JSON, text files, Kafka queues, and log data.
- Install and configure Docker images for Telegraf, InfluxDB, Grafana, and Kapacitor on AWS EC2 for cloud monitoring.
- Design and develop Kapacitor scripts for alerting via push notifications, SMS, email, and Slack.
- Define the technology/Big Data strategy and roadmap for client accounts, and guide implementation of that strategy within projects.
- Apply strong project-management skills to deliver complex projects, including effort/time estimation, building a detailed work breakdown structure (WBS), managing the critical path, and using PM tools and platforms.
- Build scalable client-engagement-level processes for faster turnaround and higher accuracy.
- Run regular project reviews and audits to ensure that projects are being executed within the guardrails agreed upon by all stakeholders.
- Manage client stakeholders and their expectations with a regular cadence of weekly meetings and status updates.
Skills / Knowledge required:
- 2+ years of hands-on experience with Java / Spring platforms.
- Expert-level hands-on experience with AWS Identity and Access Management (IAM), Redshift, Elasticsearch, Athena, S3, AWS Glue, S3 encryption (SSE-C, SSE-KMS), Redshift encryption, Glacier, and Kinesis.
- Expertise in deploying applications by leveraging AWS App Service, API Apps, and Web Apps.
- Expert-level knowledge and hands-on experience in containerization, image building, packaging, creating CI/CD pipelines, and managing infrastructure as code.
- Good knowledge of network, infrastructure, and security aspects of Azure.
- Good knowledge of embedding ML into applications, application integration, application identities and security, and developing mobile applications such as chatbots by leveraging native Azure services; knowledge of Power BI.
- Experience with Agile ways of working.
- Ability to work independently as an individual contributor and as a team member.
Technologies Involved:
- Experience in AWS EMR, AWS Glue, SQL, and ETL architecture.
- Experience in data modeling and API design.
- Experience with AWS NoSQL databases (DocumentDB, DynamoDB) and Aurora.
- Implementation experience with AWS Elasticsearch and APIs.
- S3, S3 encryption (SSE-C, SSE-KMS), Redshift encryption, Glacier, Kinesis.
- Java, Spring Boot, JUnit.
Work location is Portland, ME, with required travel to client locations throughout the USA.
Rite Pros is an equal opportunity employer (EOE).
Please mail resumes to:
Rite Pros, Inc.
565 Congress St, Suite # 305
Portland, ME 04101.
Email: resumes@ritepros.com