
Spark Scala Engineer | Engineering Job in Leeds, West Yorkshire | 7216565921

This listing was posted on iSmartRecruit.

Spark Scala Engineer

Location:
Leeds, W Yorks
Description:

Job Type: Contract, 6+ months, Inside IR35
Location: Leeds, UK (hybrid; the candidate is required to be at the client site in Leeds 3 days/week)

Roles and Responsibilities (Spark Scala Engineer):
  • Develop and maintain data pipelines: you will be responsible for designing, building, and maintaining data pipelines using Apache Spark and Scala. This includes:
    o Extracting data from various sources (databases, APIs, files)
    o Transforming and cleaning the data
    o Loading the data into data warehouses or data lakes (e.g., BigQuery, Amazon Redshift)
    o Automating pipeline execution using scheduling tools (e.g., Apache Airflow)
  • Work with Big Data technologies: you will likely work with various Big Data technologies alongside Spark, including:
    o Hadoop Distributed File System (HDFS) for storing large datasets
    o Apache Kafka for real-time data streaming
    o Apache Hive for data warehousing on top of HDFS
    o Cloud platforms (AWS, Azure, or GCP) for deploying and managing data pipelines
  • Data analysis and modelling: while the primary focus is data engineering, the role may require basic analysis skills, such as writing analytical queries in SQL or Spark SQL against processed data and building simple data models to understand data relationships.

Required Skills:
  • Programming languages: proficiency in Scala and Spark is essential; familiarity with Python and SQL is a plus.
  • Big Data technologies: understanding of HDFS, Kafka, Hive, and cloud platforms.
  • Data engineering concepts: knowledge of data warehousing, data pipelines, data modelling, and data-cleansing techniques.
  • Problem-solving and analytical skills: the ability to analyse complex data problems, design efficient solutions, and troubleshoot issues.
  • Communication and collaboration: the ability to communicate effectively with data scientists, analysts, and business stakeholders.
Desired Skills (may vary):
  • Machine learning libraries: familiarity with Spark ML or other machine learning libraries in Scala.
  • Cloud computing: experience deploying data pipelines on cloud platforms such as AWS, Azure, or GCP.
  • DevOps tools: knowledge of Git, CI/CD pipelines, and containerization tools (Docker, Kubernetes).
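For candidates gauging fit, the pipeline work described above follows a familiar extract → transform → load shape. The sketch below illustrates that shape using only the Scala standard library; a real pipeline for this role would use Spark's DataFrame/Dataset API and read from actual sources (databases, Kafka, files). The record type, field names, and cleaning rules here are illustrative assumptions, not part of the listing.

```scala
// "Extract": raw rows as they might arrive from a CSV dump or API response.
case class RawEvent(userId: String, amount: String)

// "Transform": parsed, validated, normalised records.
case class CleanEvent(userId: String, amount: Double)

// Parse amounts, drop malformed or negative rows, normalise user IDs.
def transform(raw: Seq[RawEvent]): Seq[CleanEvent] =
  raw.flatMap { r =>
    scala.util.Try(r.amount.trim.toDouble).toOption
      .filter(_ >= 0) // discard rows whose amount is negative or unparsable
      .map(a => CleanEvent(r.userId.trim.toLowerCase, a))
  }

// "Load": an aggregation standing in for a warehouse or data-lake write.
def totalsByUser(events: Seq[CleanEvent]): Map[String, Double] =
  events.groupBy(_.userId).view.mapValues(_.map(_.amount).sum).toMap

val raw = Seq(
  RawEvent(" Alice ", "10.5"),
  RawEvent("bob", "not-a-number"), // malformed row: silently dropped
  RawEvent("alice", "4.5")
)

val totals = totalsByUser(transform(raw))
println(totals) // Map(alice -> 15.0)
```

In Spark the same logic would typically be expressed as DataFrame operations (`filter`, `withColumn`, `groupBy(...).agg(...)`) so it can run distributed across a cluster, but the per-record reasoning is the same.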
Posted:
March 26 on iSmartRecruit
