Senior Scala Engineer

Vacancy details
Software Engineering
Scala Engineer
Senior
India, Pune
Hybrid

Our client is a location data and technology platform company that empowers customers to achieve better outcomes – from helping a city manage its infrastructure or a business optimize its assets to guiding drivers to their destination safely. They create solutions that fuel innovation, provide opportunity, and foster inclusion to improve people’s lives. If you are inspired by an open world and driven to create positive change, join us!

As a Senior Backend Engineer in the Map Data Processing group, you will develop smart map data processing tools for state-of-the-art mapping technologies. You will work autonomously within an agile team. Your responsibilities will cover developing, extending, and maintaining tools and services that process map data for a global navigation database. You will translate product strategies into technology strategies, lead the long-term architectural direction, and help design and build industry-grade, customer-facing, geo-data-intensive products. You will work closely with other engineering and operations teams and with the internal users of the tool chains your team develops, and partner with product managers and the larger map operations business units.

What project we have for you

Join our engineering organization to build the next generation of automated map data processing for a global provider of location and mapping solutions used in automotive and mobility.

You will work on the platform — a system that takes road data from multiple sources (GPS traces, traffic signs, open map data), detects errors in the map, and either auto-corrects them or flags them for human review. The platform produces quality signals and road attribute corrections that feed directly into navigation and autonomous driving products.
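The auto-correct versus human-review split described above can be sketched as a confidence-threshold routing step. The names and the 0.9 threshold below are illustrative assumptions, not the actual platform API:

```scala
// Hypothetical sketch: route a detected map error either to auto-fix
// or to a human review queue, based on detection confidence.
final case class MapError(roadId: Long, attribute: String, confidence: Double)

sealed trait Resolution
final case class AutoFix(error: MapError)      extends Resolution
final case class ManualReview(error: MapError) extends Resolution

object ErrorRouter {
  // Assumed cut-off: high-confidence detections are fixed automatically.
  val AutoFixThreshold = 0.9

  def route(error: MapError): Resolution =
    if (error.confidence >= AutoFixThreshold) AutoFix(error)
    else ManualReview(error)
}
```

In a real pipeline this decision would likely be one stage among many, with the `ManualReview` branch feeding a review workflow rather than a simple value.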

Working as part of an agile, cross-functional team, you will build and maintain the Scala-based backend services and distributed data pipelines that keep this platform accurate, reliable, and ready for production load.

What you’ll work on:

  • Distributed batch pipelines for large-scale road dataset processing (Scala + Spark on EMR)
  • Error detection, confidence-based auto-fix logic, and manual review workflows for map attributes
  • Data validation, quality gates, and traceability across the pipeline
  • Cloud-native execution environments (infrastructure automation, CI/CD, operational readiness)
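As a rough illustration of the "quality gates" item above, a gate can be modeled as a predicate over a batch of records: the batch passes only if the fraction of valid records clears a minimum ratio. The record fields, the validity rule, and the 0.99 ratio are all assumptions for the sketch, not the platform's actual schema:

```scala
// Hypothetical sketch of a batch data quality gate.
final case class RoadRecord(id: Long, speedLimitKph: Option[Int])

object QualityGate {
  // Assumed gate threshold: at least 99% of records must be valid.
  val MinValidRatio = 0.99

  // Assumed rule: a missing speed limit is allowed; a present one
  // must be a plausible value.
  def isValid(r: RoadRecord): Boolean =
    r.speedLimitKph.forall(s => s > 0 && s <= 130)

  /** Returns true when the batch clears the gate. */
  def passes(batch: Seq[RoadRecord]): Boolean =
    batch.nonEmpty &&
      batch.count(isValid).toDouble / batch.size >= MinValidRatio
}
```

In a Spark pipeline the same check would typically run as an aggregation over a `Dataset[RoadRecord]` rather than an in-memory `Seq`.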

Technologies:

  • Scala
  • Apache Spark
  • Kafka
  • SQL / PostgreSQL
  • AWS (Step Functions, ECS, Lambda, EMR)
  • Akka
  • Apache Airflow
  • Java
  • Python
  • CI/CD (GitHub / GitLab)

What you will do

  • Design and implement scalable backend services and data processing pipelines
  • Participate in architecture/design discussions and contribute to technical decisions
  • Own delivery end-to-end: development, testing, performance optimization, and production support readiness
  • Improve code quality via reviews, automated testing, and engineering best practices
  • Contribute to estimations, planning, and iterative delivery in an Agile/Scrum process
  • Mentor engineers, share knowledge, and promote strong engineering culture

What you need for this

Required skills

  • 5+ years building backend systems with Scala (strong core Scala — primary language)
  • Distributed data processing — Apache Spark or equivalent (production usage required)
  • Kafka — event-driven architecture, producers/consumers, production usage
  • AWS — practical experience with EMR, Step Functions, ECS, or Lambda
  • Strong SQL and data modeling — ideally PostgreSQL (schema design, query optimization, indexes)
  • Strong knowledge of Core Scala and OOP principles
  • Good understanding of concurrency and multithreading concepts
  • JVM fundamentals and performance optimization
  • Low-level design (LLD): modular architecture, clean abstractions, component design
  • API design awareness: RESTful services, backward compatibility
  • Testing strategy: unit, integration, E2E — deliberate approach to coverage
  • Monitoring & observability: logging, metrics, alerting, dashboards in production systems
  • Solid knowledge of data structures and algorithms (Big O, time/space complexity)
  • CI/CD pipelines (GitHub / GitLab) and engineering best practices
  • English: Upper Intermediate+ (written and spoken)

Nice to have

  • Java
  • Python — any meaningful usage (scripting, ETL, data processing, Flask/FastAPI; LLM/MCP tooling is a bonus)
  • Apache Airflow or similar pipeline orchestration
  • Experience with geospatial/map-related data or large-scale data quality systems

What it’s like to work at Intellias

At Intellias, where technology takes center stage, people always come before processes. By creating a comfortable atmosphere in our team, we empower individuals to unlock their true potential and achieve extraordinary results. That’s why we offer a range of benefits that support your well-being and fuel your professional growth.
We are committed to fostering equity, diversity, and inclusion as an equal opportunity employer. All applicants will be considered for employment without discrimination based on race, color, religion, age, gender, nationality, disability, sexual orientation, gender identity or expression, veteran status, or any other characteristic protected by applicable law.
We welcome and celebrate the uniqueness of every individual. Join Intellias for a career where your perspectives and contributions are vital to our shared success.
