A prestigious Fortune 500 company is currently seeking a Big Data Engineer. The candidate will be responsible for taking a DevOps approach to developing new data-analysis systems; coding and developing advanced analytics solutions to make and optimize business decisions and processes; integrating new tools to improve analytics; and addressing new technical challenges using existing and emerging technology solutions.
- Executes complex functional work tracks and drives the execution of operational/technical objectives for data analytic outputs and business solutions.
- Partners with other internal teams and peers in the department to ensure holistic Big Data solutions meet the needs of various stakeholders.
- With coaching, can identify new areas of data, research, and Big Data technology that can solve business problems.
- Applies Big Data best practices to develop technical solutions used for analytical insights.
- Acts as an influencer within the department on the effectiveness of Big Data solutions for solving business problems.
- Supports innovation; regularly provides new ideas to improve the people, processes, and technology that interact with the analytics ecosystem.
- With coaching, develops and builds frameworks/prototypes that integrate big data and advanced analytics to make better business decisions.
- Executes on Big Data requests to improve the accuracy, security, quality, completeness, and speed of data and of the decisions made from Big Data analysis.
- Uses, learns, teaches, and supports a wide variety of Big Data and Data Science tools to achieve results (e.g., R, ETL tools, Hadoop).
- Uses, learns, teaches, and supports a wide variety of programming languages in Big Data and Data Science work (e.g., Java, C#, Python, and Perl).
- Supports a clear communication strategy that keeps all relevant stakeholders informed and gives them an opportunity to influence the direction of the work.
- Trains and develops other engineers.
- Strong experience as a Big Data Engineer.
- Bachelor's Degree in Computer Science, MIS, or related area, or equivalent work experience. Master's Degree in a quantitative or scientific field would be a plus.
- Experience using software development to drive data science and analytics efforts.
- Experience with database integration, dataflow management, and ETL technologies.
- Experience with various data types (e.g., relational, unstructured, hierarchical, and linked graph data).
- Experience developing, managing, and manipulating large, complex datasets.
- Understanding of security risks and vulnerabilities in open-source systems, and of the tools and techniques used to minimize them. Where appropriate, provides recommendations and justifications that preserve speed of access while minimizing risk for scientists and developers.
- Experience with and a solid understanding of Big Data ecosystems such as Hadoop, Spark, Kafka, and streaming frameworks.
- Ability to code and develop prototypes in languages such as Python, Scala, Java, C, R, and SQL.
- Ability to communicate and present advanced technical topics to general audiences including teams across multiple time zones.
- Experience leading project teams of various skill levels.
- Understanding of predictive modeling techniques a plus.
- Automation, configuration management (e.g., Ansible, Puppet), DevOps practices, and CI/CD pipelines (e.g., Jenkins).
- Elementary networking skills: switching, routing, firewalls, and load balancing.
- Linux Containers/Docker.