Job Description:
The primary cloud ecosystem in use is AWS.
Bachelor’s Degree in Computer Science, Computer Engineering, or a related technical field is required. Master’s Degree or other advanced degree preferred.
4-6+ years of total experience, including 2+ years of relevant experience with Big Data platforms.
Strong analytical, problem-solving, and communication skills.
3+ years of experience with big data and the Hadoop ecosystem (Spark, HDFS, Hive, Sqoop, Hudi, Parquet, Apache NiFi, and Kafka).
Hands-on experience in Scala/Spark (see the sketch after this list).
Experience with Python is a plus.
Hands-on knowledge of Oracle and MS SQL databases.
Experience with job schedulers such as CA AutoSys.
Experience with source control, CI, and artifact management tools (e.g., Git, Jenkins, Artifactory).
Experience with analytics/BI platforms such as Tableau and AtScale is a plus.
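
To illustrate the hands-on Scala/Spark requirement above, here is a minimal sketch of the kind of batch job this stack implies: reading Parquet from HDFS and writing an aggregated Hive table. The path, column names, and table name are illustrative assumptions, not details from this posting.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object DailyEventLoad {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("daily-event-load")
          .enableHiveSupport()  // required to write managed Hive tables
          .getOrCreate()

        // Read raw events landed on HDFS as Parquet (placeholder path).
        val events = spark.read.parquet("hdfs:///data/raw/events")

        // Aggregate events per day and type, then persist to a Hive
        // table partitioned by event date (placeholder column/table names).
        events
          .withColumn("event_date", to_date(col("event_ts")))
          .groupBy(col("event_date"), col("event_type"))
          .agg(count(lit(1)).as("event_count"))
          .write
          .mode("overwrite")
          .partitionBy("event_date")
          .saveAsTable("analytics.daily_events")

        spark.stop()
      }
    }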