Middle Scala Engineer

Vacancy details
Software Engineering
Scala Engineer
Middle
India, Pune
Hybrid

Our client is a location data and technology platform company that empowers customers to achieve better outcomes – from helping a city manage its infrastructure or a business optimize its assets, to guiding drivers safely to their destination. They create solutions that fuel innovation, provide opportunity, and foster inclusion to improve people’s lives. If you are inspired by an open world and driven to create positive change, join us!

As a Backend Engineer in the Map Data Processing group, you will develop smart map data processing tools for state-of-the-art mapping technologies. You will work self-sufficiently in an agile team. Your responsibilities will cover developing, extending, and maintaining tools and services that process map data for a global navigation database. You will translate product strategies into technology strategies, help lead the long-term architectural direction, and help design and build industry-grade, customer-facing, geo-data-intensive products. You will work closely with other engineering and operations teams and with the internal users of the tool chains your team develops, and you will partner with product managers and the larger map operations business units.

What project we have for you

Join our engineering organization to build the next generation of automated map data processing for a global provider of location and mapping solutions used in automotive and mobility.

You will work on the platform — a system that takes road data from multiple sources (GPS traces, traffic signs, open map data), detects errors in the map, and either auto-corrects them or flags them for human review. The platform produces quality signals and road attribute corrections that feed directly into navigation and autonomous driving products.
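The detect-then-correct-or-flag flow described above could be sketched roughly as follows. This is purely illustrative: the class names, fields, and the confidence threshold are hypothetical, not the client's actual API.

```python
from dataclasses import dataclass

@dataclass
class MapError:
    """A detected discrepancy between observed road data and the map."""
    road_id: str
    attribute: str    # e.g. "speed_limit"
    observed: object  # value derived from GPS traces / traffic signs
    current: object   # value currently stored in the map
    confidence: float # detector's confidence in the observation

# Hypothetical cutoff: above it, corrections are applied automatically.
AUTO_CORRECT_THRESHOLD = 0.9

def process(error: MapError) -> dict:
    """Route a detected error: auto-correct or flag for human review."""
    if error.confidence >= AUTO_CORRECT_THRESHOLD:
        return {"road_id": error.road_id, "action": "auto_correct",
                "attribute": error.attribute, "new_value": error.observed}
    return {"road_id": error.road_id, "action": "flag_for_review",
            "attribute": error.attribute}

high = MapError("r42", "speed_limit", 50, 60, 0.97)
low = MapError("r43", "one_way", True, False, 0.55)
print(process(high)["action"])  # auto_correct
print(process(low)["action"])   # flag_for_review
```

In a real pipeline the same decision would be made per record inside a Spark job, with the flagged items landing in a review queue rather than a return value.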

Working as part of Rakhee’s agile team, you will build and maintain backend services and data pipelines that keep this platform accurate, reliable, and running at scale. The primary language on this team is Scala or Python; solid engineers in either language are welcome.

What you’ll work on:

  • Backend services and data pipelines for road dataset processing
  • Error detection logic and data validation workflows
  • Data quality gates and traceability concepts
  • Cloud-native execution environments (infrastructure automation, CI/CD, operational readiness)

Technologies:

  • Scala / Python (primary — one or the other)
  • Apache Spark
  • Kafka
  • SQL / PostgreSQL
  • AWS (Step Functions, ECS, Lambda, EMR)
  • Apache Airflow
  • Java
  • CI/CD (GitHub / GitLab)

What you will do

Responsibilities

  • Implement backend services and data processing pipeline components following established architecture
  • Write clean, maintainable code with proper unit and integration test coverage
  • Own assigned tasks end-to-end: development, testing, and documentation
  • Follow team standards for monitoring, logging, and observability
  • Participate in code reviews — both giving and receiving feedback
  • Contribute to estimations, planning, and iterative delivery in an Agile/Scrum process
  • Actively learn from senior engineers and grow technical skills

 

What you need for this

Required Skills

  • 3+ years building backend systems with Scala or Python as your primary language (confirmed in project work — not just listed in skills)
  • Distributed data processing — Apache Spark or equivalent (production or near-production usage required)
  • Kafka — event-driven architecture, producers/consumers; project-level usage required
  • AWS — practical experience with EC2, S3, Lambda, or equivalent cloud services
  • Strong SQL and data modeling basics — ideally PostgreSQL (schema design, query optimization, indexes)
  • Solid knowledge of data structures, OOP, and design patterns
  • LLD principles: modular architecture, clean abstractions
  • API design basics and backward compatibility awareness
  • Testing practices: unit and integration testing — understands why and how
  • Basic monitoring & observability: logs, metrics, dashboards
  • Strong analytical and debugging skills
  • CI/CD pipelines (GitHub / GitLab) and engineering best practices
  • English: Upper Intermediate+ (written and spoken)

Nice to Have

  • The other primary language — if your primary is Scala, Python is a plus; if your primary is Python, Scala is a plus
  • Java
  • Apache Airflow or similar pipeline orchestration
  • LLM/MCP tooling experience
  • Experience with geospatial/map-related data or large-scale data quality systems

 

What it’s like to work at Intellias

At Intellias, where technology takes center stage, people always come before processes. By creating a comfortable atmosphere in our team, we empower individuals to unlock their true potential and achieve extraordinary results. That’s why we offer a range of benefits that support your well-being and fuel your professional growth.
We are committed to fostering equity, diversity, and inclusion as an equal opportunity employer. All applicants will be considered for employment without discrimination based on race, color, religion, age, gender, nationality, disability, sexual orientation, gender identity or expression, veteran status, or any other characteristic protected by applicable law.
We welcome and celebrate the uniqueness of every individual. Join Intellias for a career where your perspectives and contributions are vital to our shared success.

Skills

Apache Spark
AWS
Java
Scala
