Planning and executing a variety of methodologies as part of the concept stage in the overall project development of big data solutions. Creating GUI prototypes. Researching, planning and developing project strategies. Creating logical and physical data models. Designing the architecture and analyzing and developing mappings, transformations, mapplets, sessions and workflows for integrating data using ETL tools. Coordinating the building of solutions using Hadoop, Spark, Hive, Kafka, Pig, MapReduce and Python. Overseeing the implementation of test validations of the solutions. Ensuring the optimization of the developed solutions. Producing project documentation.
Requirements: Master’s Degree or foreign degree equivalent in Computer Science, Computer Information Systems, Computer Applications, Information Technology or Engineering and one year of experience in the position offered or one year of experience in the IT field (or Bachelor’s Degree and five years of experience).
Special requirements: Experience with Hadoop, Spark, Hive, Kafka, Pig, MapReduce and Python. Travel to various unanticipated client sites required. May reside anywhere in the United States.
Any applicant interested in this position may apply by regular mail (including Reference Number 14530) to:
Mr. Gordon Drijver
1001 East Palm Avenue
Tampa, FL 33605