Ops Data Engineer

Who You Are:

You are a curious, driven, and detail-oriented individual who is passionate about working with data and solving operational challenges. You thrive in fast-paced environments, enjoy troubleshooting, and have a keen interest in building and optimizing data pipelines. You are looking for a hands-on role where you can learn and grow as a Data Engineer while contributing directly to the success of a growing SaaS company. You’re comfortable managing daily operations and have a solid foundation in data engineering skills such as SQL, Python, data visualization, ETL and cloud solutions.

The Opportunity:

As an Ops Data Engineer, you will play a key role in supporting daily operations across various platforms, including Alteryx, Tableau, Postgres, and AWS. You will monitor data processes, troubleshoot issues, and help improve and automate operational workflows. This role provides the opportunity to work closely with the founding team and engineers, giving you direct exposure to the inner workings of a data-driven startup. You’ll gain valuable experience in managing large-scale data infrastructure and building scalable ETL pipelines, all while helping drive product development at YDP.

Who We Are:

At Your Data Playbook (YDP), we believe data is the most valuable asset of the 21st century. Our mission is to unlock this potential and deliver meaningful growth for eCommerce entrepreneurs worldwide.
We empower entrepreneurs by transforming their data into actionable intelligence, enabling them to identify and prioritize the most impactful opportunities for growth.
Join us in creating a future where harnessing data fuels success for every entrepreneur.

What You'll Do:

  • Execute recurring operational tasks: Follow well-documented SOPs to carry out daily and weekly workflows reliably and on schedule. These include member data validation, dataset completeness checks, and data delivery processes (a minimal sketch of one such check appears after this list).
  • Troubleshoot and resolve Tier-1 issues: Respond to customer inquiries, support Customer Success on technical requests, and perform basic data QA checks to ensure data quality.
  • Investigate and resolve IROPS-level (irregular operations) issues: Lead root-cause investigations for backend failures such as pipeline disruptions, server crashes, or critical data anomalies. Coordinate recovery efforts and document incident outcomes.
  • Monitor infrastructure performance and alerting systems: Maintain and enhance dashboards, alerts, and monitoring tools to ensure system stability and performance.
  • Proactively analyze trends and capacity needs: Review usage metrics such as execution time, disk space, memory usage, and growth rates to recommend performance optimizations and prevent potential incidents.
  • Maintain and manage software licensing: Oversee Alteryx and Tableau license usage and renewals, ensuring compliance across environments.
  • Automate and optimize operational workflows: Enhance and streamline both current operational workflows and manual tasks to reduce effort and increase reliability. Improve server performance by tuning AWS services and cloud infrastructure, and refine alert systems to reduce bottlenecks.
  • Recommend and apply process improvements: Use insights from daily operations to propose and implement enhancements using modern tools (AI-first agents, serverless architecture, RAG, LLMs, etc.) that reduce manual intervention and improve system reliability. 
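
To give a concrete flavor of the day-to-day work, here is a minimal sketch of the kind of dataset completeness check and alert this role would run and maintain. It is an illustration only: it assumes psycopg2, a connection string in a DATABASE_URL environment variable, and a hypothetical member_events table; the real SOPs, thresholds, and alerting channel at YDP would differ.

    import os
    import psycopg2  # assumes a Postgres connection string in DATABASE_URL

    EXPECTED_MIN_ROWS = 10_000  # hypothetical floor; tuned per dataset in practice

    def check_completeness(table: str, date_column: str) -> bool:
        """Return True if yesterday's load for `table` met the row-count floor."""
        conn = psycopg2.connect(os.environ["DATABASE_URL"])
        try:
            with conn.cursor() as cur:
                # Identifiers come from trusted internal config, not user input.
                cur.execute(
                    f"SELECT count(*) FROM {table} "
                    f"WHERE {date_column} = current_date - 1"
                )
                (rows,) = cur.fetchone()
        finally:
            conn.close()
        if rows < EXPECTED_MIN_ROWS:
            # A real check would page or post to the team's alert channel here.
            print(f"ALERT: {table} loaded only {rows} rows yesterday")
            return False
        return True

    if __name__ == "__main__":
        check_completeness("member_events", "event_date")  # hypothetical names

In practice a scheduler (cron, Alteryx, or similar) would run a battery of such checks daily and feed the results into the monitoring dashboards described above.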

Must-Haves in a Candidate:

  • 2+ years of hands-on experience with data engineering tools such as SQL, Python, Alteryx, PostgreSQL, AWS, Tableau or related technologies.
  • 2+ years of hands-on experience building and optimizing scalable ETL pipelines using tools like Alteryx, Informatica, dbt or equivalent.
  • Experience with Python for scripting and automation.
  • Experience with relational databases like PostgreSQL, including troubleshooting and optimizing database performance (see the plan-capture sketch after this list).
  • Experience troubleshooting operational errors: Familiarity with common issues such as server outages, disk space management, and workflow failures.
  • Strong communication skills: Ability to effectively manage expectations and communicate operational updates with internal teams.
  • Attention to detail: You enjoy creating and maintaining accurate data pipelines and performing data validation checks.
  • Proactive problem-solving: Ability to handle operational disruptions and recovery processes, with a mindset for continuous improvement.
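
As a hedged illustration of the database troubleshooting asked for above: a routine first step when a Postgres-backed workflow slows down is to capture the execution plan of the suspect query. A minimal sketch, again assuming psycopg2, a DATABASE_URL environment variable, and the same hypothetical table:

    import os
    import psycopg2

    def capture_plan(query: str) -> str:
        """Run EXPLAIN (ANALYZE, BUFFERS) and return the plan as text.

        Note: ANALYZE actually executes the query, so only point this at
        read-only statements (or wrap writes in a rolled-back transaction).
        """
        conn = psycopg2.connect(os.environ["DATABASE_URL"])
        try:
            with conn.cursor() as cur:
                cur.execute("EXPLAIN (ANALYZE, BUFFERS) " + query)
                return "\n".join(row[0] for row in cur.fetchall())
        finally:
            conn.close()

    # Hypothetical slow query; a sequential scan in the plan would
    # suggest a missing index on member_id.
    print(capture_plan("SELECT * FROM member_events WHERE member_id = 42"))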

Nice to Have in a Candidate:

  • Familiarity with cloud platforms: AWS, Google Cloud, or equivalent.
  • Ability to create data visualizations: Using Tableau or other visualization tools.
  • Interest in web scraping and API integration: Building and maintaining systems that collect data (a brief sketch follows this list).
  • Basic knowledge of HTML and CSS.
  • Experience with Docker, Terraform, or similar technologies.
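
For the web scraping and API integration interest above, here is a minimal sketch of a well-behaved API pull using the requests library. The endpoint and field names are placeholders, not a real integration:

    import requests

    def fetch_orders(page: int = 1) -> list:
        """Fetch one page of orders from a placeholder partner API."""
        resp = requests.get(
            "https://api.example.com/v1/orders",   # hypothetical endpoint
            params={"page": page, "per_page": 100},
            timeout=30,                            # never let an ops job hang
        )
        resp.raise_for_status()  # fail loudly rather than ingest bad data
        return resp.json()["orders"]

    if __name__ == "__main__":
        print(len(fetch_orders()))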

Details:

  • Job Type: Full-time.
  • Location: US, Panama, Remote. 
  • Salary: Competitive and dependent on experience.
  • Benefits: Opportunity for growth and the ability to shape the future of the company.