Senior Technical Solutions Engineer, Databricks

Company: Databricks
Job title: Senior Technical Solutions Engineer (Spark)
Job location: Seoul, South Korea
Type: Full Time

Responsibilities:

  • Perform initial-level analysis and troubleshooting of customer-reported job slowness issues in Apache Spark™ using Spark UI metrics, the DAG, and event logs.
  • Troubleshoot and resolve customer issues through deep code-level analysis of Apache Spark™ core internals, Apache Spark™ SQL, Structured Streaming, Delta, Lakehouse, and other Databricks Runtime features.
  • Assist customers in reproducing Apache Spark™ problems and providing solutions in the areas of Apache Spark™ SQL, Delta, memory management, performance tuning, Streaming, Data Science, and data integration (a minimal repro sketch follows this list).
  • Participate in the Designated Solutions Engineer program, guiding one or two strategic customers through their day-to-day Apache Spark™ and cloud issues.
  • Coordinate with Account Executives, Customer Success Engineers, and Resident Solution Architects on customer issues and best-practice guidelines.
  • Participate in screen-sharing meetings and Slack channel conversations with team members and customers, helping drive major Apache Spark™ issues at an individual-contributor level.
  • Build an internal wiki and knowledge base of technical documentation and manuals for both the support team and customers, including company documentation and knowledge base articles.
  • Coordinate with the Engineering and Backline Support teams to report product defects.
  • Participate in the weekday and weekend on-call rotation: run escalations during Databricks Runtime outages and incident situations, plan day-to-day activities, and provide an escalated level of support for important customer operational issues.
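
As a rough illustration of the troubleshooting work described above (not part of the posting itself), the minimal PySpark sketch below reproduces a "job slowness" symptom via a hypothetical skewed join; the resulting straggler task is the kind of signal the Spark UI metrics and event logs surface. The app name and data volumes are illustrative assumptions.

    # Minimal sketch of a reproducible job-slowness case: a skewed join.
    # Hypothetical example; app name and data shapes are assumptions.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("skew-repro").getOrCreate()

    # ~90% of rows share one join key, so a single partition (and task)
    # does most of the work -- visible as a straggler in the Spark UI.
    facts = spark.range(10_000_000).withColumn(
        "key",
        F.when(F.rand() < 0.9, F.lit(0)).otherwise(F.col("id") % 1000),
    )
    dims = spark.range(1000).withColumnRenamed("id", "key")

    # The stage timeline and per-task metrics expose the skew; the same
    # DAG is captured in the event logs for offline analysis.
    facts.join(dims, "key").groupBy("key").count().show()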

Requirements & Skills:

  • 3+ years of hands-on experience developing production-scale industry use cases with two or more of: Big Data, Hadoop, Apache Spark™, Machine Learning, Artificial Intelligence, Streaming, Kafka, Data Science, or Elasticsearch. Apache Spark™ experience is mandatory.
  • Experience performance-tuning and troubleshooting Hive- and Apache Spark™-based applications at production scale.
  • Hands-on experience with the JVM and memory management techniques such as garbage collection and heap/thread dump analysis.
  • Experience with SQL-based databases and Data Warehousing/ETL technologies such as Informatica, DataStage, Oracle, Teradata, SQL Server, and MySQL, including SCD-type use cases (a sketch follows this list).
  • Experience with AWS, Azure, or GCP.
  • Written and spoken proficiency in both Korean and English.
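
As a hedged illustration of the SCD-type use cases mentioned above (again, not part of the posting), the sketch below shows an SCD Type 2-style upsert using Delta Lake's MERGE. It assumes a Delta-enabled Spark session such as Databricks Runtime; the table and column names are hypothetical, and a full SCD2 flow would also insert the new version of each changed row.

    # Hedged sketch of an SCD Type 2-style upsert with Delta Lake MERGE.
    # Assumes a Delta-enabled Spark session (e.g., Databricks Runtime) and
    # pre-existing Delta tables; dim_customer, staged_updates, and the
    # columns below are hypothetical names, not from the posting.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

    spark.sql("""
        MERGE INTO dim_customer AS t
        USING staged_updates AS s
        ON t.customer_id = s.customer_id AND t.is_current = true
        -- Close out the current row when a tracked attribute changed
        -- (a complete SCD2 flow would also insert the new version).
        WHEN MATCHED AND t.address <> s.address THEN
          UPDATE SET is_current = false, end_date = current_date()
        -- Insert brand-new keys as open, current rows.
        WHEN NOT MATCHED THEN
          INSERT (customer_id, address, is_current, start_date, end_date)
          VALUES (s.customer_id, s.address, true, current_date(), null)
    """)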
