Linqia is the leader in the influencer marketing industry. We are a growing tech start-up that has achieved 100% year-over-year growth and reached break-even. At Linqia, we partner with the world's largest brands, including Danone, AB InBev, Kimberly-Clark, Unilever, and Walmart, to build compelling and effective influencer marketing campaigns. Our AI-driven platform and team of experts are leading the transformation of influencer marketing. We value intelligence, recognize talent, and have instilled a culture that supports career development and growth for our employees. We thrive on innovation and accountability, with a customer-first attitude that adds true value to everything we touch. Our team members are smart, hard-working, have integrity, and love to have fun as we play to win.

**Experience level**: One to three years
**Location**: Anywhere in Colombia
**Employment type**: Full-time contract

**ABOUT THE ROLE**:

Join a cloud-native team that owns the entire software delivery life cycle on Amazon Web Services. You will combine deep Kubernetes expertise with Python and shell scripting to automate, monitor, and continuously improve the Linqia platform while driving FinOps practices to keep our cloud footprint efficient. You will work in a GitOps culture where every change is delivered through pull requests and rolled out by automated pipelines.

**WHAT YOU WILL DO**:

- Design, maintain, and evolve our AWS account structure, VPC networking, IAM policies, security boundaries, and cost-management controls using Terraform and the AWS console
- Operate and harden production-grade Kubernetes clusters on AWS EKS, including upgrades, service mesh, policy management, and multi-cluster architectures driven by Argo CD
- Build reusable infrastructure-as-code modules with Terraform that provision cloud resources in minutes while enforcing tagging standards and least-privilege access
- Create self-service CI/CD pipelines in Jenkins and GitHub Actions for fast, safe releases with automated testing and promotion across environments
- Deliver real-time observability with Datadog, Prometheus, Grafana, CloudWatch, and OpenTelemetry, introducing Loki when deep log analytics are required
- Deploy and scale streaming platforms such as Apache Kafka and stateful data stores like PostgreSQL, MySQL, and Elasticsearch, either on Kubernetes via Operators and StatefulSets or on hardened EC2 virtual machines, selecting the best deployment pattern for each workload
- Support developers by maintaining Podman-based local dev boxes and staging environments that mirror production, ensuring a smooth hand-off from local code to cloud-native deployments
- Implement FinOps practices: track and forecast AWS spend, enforce cost-allocation tagging (see the sketch after this list), identify rightsizing opportunities, manage Savings Plans or Reserved Instances, and build cost-optimisation dashboards for engineering and finance stakeholders
- Maintain secure networking layers with AWS load balancers, ingress controllers, service-mesh policies, network policies, and zero-trust principles
- Write automation utilities and command-line tools in Python, and craft shell scripts that glue components and workflows together
- Champion reliability through incident reviews, capacity planning, game days, chaos testing, and service-level-objective tracking
- Collaborate in Agile rituals, plan sprints, refine backlog tickets, and pair with peers to spread DevOps and FinOps best practices
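To give a concrete flavour of the FinOps and Python automation work above, here is a minimal sketch of the kind of utility this role might build: a boto3 script that audits EC2 instances for missing cost-allocation tags. The required tag keys and the region are illustrative assumptions, not Linqia's actual tagging standard.

```python
import boto3

# Hypothetical tagging standard -- substitute your organisation's required keys.
REQUIRED_TAGS = {"team", "service", "cost-center"}


def untagged_instances(region: str = "us-east-1"):
    """Yield (instance_id, missing_tag_keys) for EC2 instances that
    violate the cost-allocation tagging standard."""
    ec2 = boto3.client("ec2", region_name=region)
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                present = {tag["Key"] for tag in instance.get("Tags", [])}
                missing = REQUIRED_TAGS - present
                if missing:
                    yield instance["InstanceId"], sorted(missing)


if __name__ == "__main__":
    for instance_id, missing in untagged_instances():
        print(f"{instance_id}: missing cost-allocation tags {missing}")
```

A report like this could feed a cost dashboard or a CI gate; in practice the same standard would also be enforced at provisioning time through Terraform's tagging defaults.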
**WHAT YOU BRING**:

- Bachelor's degree in Computer Science or equivalent practical experience
- One to three years working with cloud infrastructure or platform engineering, focused on AWS
- Deep hands-on experience with Kubernetes, preferably EKS, covering upgrades, networking, storage, RBAC, and custom resources
- Proficiency in Python and Bash or Zsh scripting
- Strong understanding of core AWS services: EC2, VPC, IAM, ALB, S3, RDS, CloudFormation, and CloudWatch
- Solid experience with Docker and container runtimes, with emphasis on Podman for local development environments
- Hands-on practice with configuration-management tools such as Ansible or Puppet and infrastructure-as-code with Terraform
- Proven use of Datadog for metrics, logs, and APM, plus familiarity with Prometheus and Grafana dashboards
- Comfortable with Git-based workflows, feature branching, and pull-request reviews
- Strong SQL skills and a deep understanding of relational database internals, plus familiarity with at least one search or analytics engine such as Elasticsearch
- Competent in Linux administration, process troubleshooting, and performance tuning
- Practical knowledge of TCP/IP, HTTP, TLS, DNS, and common networking tools
- Clear communication skills and an ability to translate complex technical topics for diverse audiences
- Familiarity with Scrum or Kanban and a continuous-improvement mindset

**EXTRA CREDIT**:

- AWS certifications such as Solutions Architect, DevOps Engineer, or FinOps Practitioner
- Experience with AWS security tooling such as GuardDuty and Security Hub