Big Data DevOps Engineer (Hadoop / Cassandra)
On-site 4 days a week | Toronto, ON
Overview:
We're looking for a DevOps Engineer with strong expertise in big data platforms, specifically Hadoop and/or Cassandra (experience with both is a plus). The role requires proven experience administering, configuring, and troubleshooting large-scale distributed systems, along with deep Linux administration and DevOps toolchain skills. Candidates must demonstrate hands-on engineering experience, not just surface-level administration.
Key Responsibilities:
- Install, configure, and maintain Hadoop or Cassandra clusters in production environments.
- Manage and optimize performance, replication, failover, and disaster recovery for big data systems.
- Develop and maintain automation scripts (Bash, Python, PowerShell) for deployment and monitoring.
- Build and manage CI/CD pipelines (Jenkins, GitHub/Bitbucket, Nexus).
- Automate provisioning using Terraform, Ansible, Salt, or similar.
- Deploy and manage cloud infrastructure (Azure preferred; AWS/GCP a plus).
- Monitor systems and manage logging using tools such as Splunk, Datadog, or the ELK stack.
- Ensure data security, scalability, and availability across environments.
Must-Have Skills:
- 5-10 years of DevOps / Systems Engineering experience.
- Strong Linux administration background (Red Hat preferred).
- Hands-on experience with Hadoop or Cassandra (administration, scaling, troubleshooting).
- Scripting skills (Bash, Python, PowerShell).
- Experience with cloud platforms (Azure IaaS, AKS, ADLS, ADF; AWS/GCP a plus).
- Strong troubleshooting and engineering mindset with proven production experience.
Nice to Have:
- Experience with both Hadoop and Cassandra.
- Exposure to Spark, Hive, HBase, or other big data ecosystem tools.
- HPC cluster experience.
- Familiarity with AutoSys, ServiceNow, JIRA, Confluence.
#itacceljobs