Jakarta, Indonesia

**JOB TYPE**
- Full-time

**About the Role**:
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify and implement internal process improvements: optimising data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and GCP 'big data' technologies.
- Build analytics tools that utilise the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data tools for analytics and data scientist team members that assist them in building and optimising our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
**Requirements**:
- Advanced working knowledge of SQL and experience with relational databases, including query authoring, as well as working familiarity with a variety of databases.
- Experience building and optimising 'big data' pipelines, architectures and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Experience building processes supporting data transformation, data structures, metadata, dependency and workload management.
- A successful history of manipulating, processing and extracting value from large, disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc.
- Experience with big data tools: Hadoop, Spark, Kafka, Apache Beam, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.