Minimum Requirements
- Bachelor’s degree in Computer Science or a related field.
- At least five (5) years of demonstrable practical experience in High-Performance Computing (HPC).
- Microsoft Certified Solutions Associate (MCSA) in Windows Server.
- Demonstrable experience with Hadoop.
- Microsoft Certification in Azure Cloud.
- Microsoft Certified Professional (MCP) in Office 365.
- ITIL Foundation certification.
- Practical experience connecting to a Hadoop cluster and using data-processing tools such as Pig, Spark, and Hive to perform parallel word tokenization.
- Demonstrable experience working with HPC systems, scalable parallel architectures, and the Linux operating system.
- Practical experience using command-line tools to set up the NameNode, DataNode, ResourceManager, and NodeManager in an HPC environment.
- Demonstrable knowledge of advanced data storage technologies and high-speed network interfaces.
- Practical experience loading data from HDFS into Hive tables.
- Ability to contribute to the technical design of software and hardware implementation strategies.
- Ability to monitor system usage and performance statistics and to understand the impacts of operating system tuning parameters.
- Good working knowledge of scripting languages such as Python.
- Demonstrable experience in VMware and Hyper-V virtualization technologies.
- Experience with network security procedures and protocols.
- Ability to analyze requirements and determine computational resource impacts.
- Ability to analyze complex problems, interpret operational needs, and develop integrated creative solutions.
- Effective verbal and written communication skills.
Job Description and Responsibilities
- Provide systems support for the advanced research computing environment and the Open Content for Agriculture Platform (OCAP) implementation.
- Allocate and maintain dynamic computing resources as required by the Open Content for Agriculture Platform (OCAP).
- Install, integrate, and manage HPC systems, clusters, operating systems, peripherals, and system interfaces.
- Monitor system usage to ensure the HPC clusters operate at optimal performance and reliability levels.
- Undertake routine maintenance and documentation, and maintain up-to-date backups of the HPC environment.
- Work with data scientists and application developers to provision scalable computing resources.
- Identify and resolve computer system anomalies and operational problems.
- Provide systems support for KALRO systems in collaboration with partners.
- Maintain an understanding of state-of-the-art computing systems and peripherals, from server operating systems to scalable and parallel architectures.
- Work with users and other computing professionals to evaluate user requirements and to configure and deploy computational resources.
- Participate in training offered by computer hardware and software vendors to keep abreast of industry trends and evolving technology.
- Undertake any other related duties assigned by the ICT Director.