
Data Engineer (VG)

Upwork  ·  US  ·  $6.2k/yr - $10k/yr

Job Title: Remote Data Solutions Engineer

Job Description:

We are seeking a talented, highly motivated Remote Data Solutions Engineer to design and implement data solutions that align with business, technical, and user requirements. In this role, you will build modern data pipelines, preprocess data using languages such as SQL and Python, and analyze raw data from various sources to develop and maintain high-quality datasets. The role requires a deep understanding of data modeling for relational databases, NoSQL databases, and data warehouses, as well as experience with big data tools such as Hadoop, Spark, and Kafka.

Responsibilities:

Design data solutions to meet business, technical, and user requirements, including building end-to-end data pipelines.

Preprocess data using SQL, Python, or other relevant languages.

Analyze raw data from multiple sources, develop and maintain datasets, and improve data quality and efficiency.

Conduct complex data analysis and report on results.

Create pipelines on Azure Synapse or similar platforms for data processing.

Apply solid knowledge of Azure data processing tools.

Identify, design, and implement process improvements, including automating manual processes and optimizing data delivery.

Implement ETL pipelines using Azure Synapse or similar tools.

Provide datasets tailored to the needs of Data Analysts and Data Scientists.

Improve the use of data resources and processes to enhance performance and reduce costs.

Support compliance with data, information, and security management requirements (GDPR, LGPD, etc.).

Master the data lifecycle, standards, and technologies used by the team and the company.

Deliver projects on time and to a high standard.

Perform technical documentation of resources, pipelines, data sources, and datasets.

Apply troubleshooting and debugging skills to resolve data defects quickly.

Must-Have Requirements:

Strong background in data modeling for relational databases, NoSQL databases, and data warehouses.

Experience with big data tools and technologies such as Hadoop, Spark, Kafka (including Spark Streaming and Kafka Streams), Python, Scala, and Talend.

Bachelor's degree in computer science or a related IT field.

Technical expertise in data models, data mining, and segmentation techniques.

Solid knowledge of relational and NoSQL databases (e.g., SQL Server, Snowflake, Cosmos DB).

Proficiency in programming languages such as Java and Python.

Hands-on experience in building pipelines and ETL with Azure Synapse, DBT, Stitch, or similar tools.

Analytical, problem-solving, and decision-making skills.

Strong knowledge of working in and managing big data environments.

Excellent analytical, technical, interpersonal, and organizational skills, with the ability to work well in a team.

Experience working in an agile environment.

Excellent communication skills for working with sponsors, CSM teams, and clients.

We Value:

Good working knowledge of continuous delivery practices with Azure DevOps or similar frameworks.

Experience with data quality assurance.

Ability to work collaboratively within a team, with strong analytical, problem-solving, and communication skills.

Flexibility and adaptability to work in ambiguous situations.

Previous experience working within an Agile team.

Understanding of Agile practices and proficiency in using tools like Azure DevOps to enable the delivery of high-quality data resources.

Employment Type:

This is a full-time remote position with availability required between 10 am and 8 pm. The role is contract-based and not associated with any other company.

This job is closed and no longer accepting applicants.