Data Engineer
Content + Source + Freshness • 12 Dec 2025 • 95% confidence
Offer value
High: data engineering is strategically central to decentralized compute platforms, and the role offers a dynamic, innovation-focused work environment.
- Dynamic role at the intersection of AI and decentralized computing
- Strong career development opportunities
- Requires proficiency in multiple programming languages and cloud tools
Pros
- Opportunities to work at the forefront of decentralized computing and AI.
- Dynamic remote work environment with a focus on innovation.
- Competitive equity and token grant compensation.
Cons
- Requires strong technical skills in various programming languages.
- Projects may evolve quickly, requiring adaptability.
- Potentially high workload in a fast-paced environment.
Who it's for
Mid-level to Senior • Fully remote
Good fit
- Experienced data engineers in cloud environments
- Tech-savvy professionals eager to work on innovative projects
- Candidates with a passion for decentralized technologies
Not recommended for
- Entry-level candidates without practical experience
- Individuals preferring slower-paced, stable work environments
- Those resistant to learning new technologies and frameworks
About the job
Data Engineer
Become a part of io.net, where your ideas and passion drive the technology of tomorrow.
Job Type: Full Time
Location: Bangkok
Position: Data Engineer
About Us: We are a DePIN company revolutionizing decentralized compute and infrastructure. We are looking for smart engineers who thrive in dynamic environments and can help us build scalable, data-driven systems using Python, Kafka, PostgreSQL, and AWS.
Responsibilities:
Design and build scalable ETL pipelines to handle large volumes of data.
Develop and maintain data models and optimize database schemas.
Work with real-time data processing frameworks like Kafka.
Ensure data quality, consistency, and reliability across systems.
Collaborate with backend engineers and data scientists to deliver insights.
Monitor and troubleshoot data workflows to ensure high availability.
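As a rough illustration only (not io.net's actual code), the ETL and data-quality responsibilities above can be sketched in Python. The table name, field names, and validation rules here are invented for the example; sqlite3 stands in for PostgreSQL, and a plain list stands in for a Kafka stream:

```python
import sqlite3

# Toy ETL sketch: extract -> validate -> load.
# sqlite3 stands in for PostgreSQL; the events list stands in for a Kafka topic.

def extract(events):
    """Yield raw event dicts from the stand-in 'stream'."""
    yield from events

def transform(event):
    """Basic data-quality checks: required fields and sane value ranges."""
    if event.get("gpu_id") is None or event.get("utilization") is None:
        return None  # drop malformed records
    util = float(event["utilization"])
    if not 0.0 <= util <= 1.0:
        return None  # drop out-of-range records
    return (event["gpu_id"], util)

def load(conn, rows):
    """Upsert cleaned rows into the stand-in warehouse table."""
    conn.executemany(
        "INSERT OR REPLACE INTO gpu_metrics (gpu_id, utilization) VALUES (?, ?)",
        rows,
    )
    conn.commit()

def run_pipeline(conn, events):
    """Run extract -> transform -> load; return the number of rows loaded."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS gpu_metrics "
        "(gpu_id TEXT PRIMARY KEY, utilization REAL)"
    )
    cleaned = [r for r in (transform(e) for e in extract(events)) if r is not None]
    load(conn, cleaned)
    return len(cleaned)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    events = [
        {"gpu_id": "g1", "utilization": 0.82},
        {"gpu_id": "g2", "utilization": 1.7},  # out of range, dropped
        {"gpu_id": None, "utilization": 0.5},  # malformed, dropped
    ]
    print(run_pipeline(conn, events))  # 1 row loaded
```

A production pipeline would replace the list with a Kafka consumer and add monitoring around each stage, but the extract/validate/load shape is the same.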
Qualifications:
Strong programming skills in Python or Java.
Experience with SQL and relational databases (e.g., PostgreSQL, MySQL).
Knowledge of data pipeline tools like Apache Airflow, Spark, or similar.
Familiarity with cloud-based data warehouses (e.g., Redshift, Snowflake).
Understanding of data modeling and data governance principles.
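To make the data-modeling qualification concrete, here is a generic, hypothetical example (not a schema from io.net): provider details are normalized into their own table, a CHECK constraint enforces a basic governance rule, and the common lookup column is indexed. sqlite3 stands in for PostgreSQL/MySQL:

```python
import sqlite3

# Hypothetical normalized schema: provider info lives in its own table,
# metrics reference it by key, and the join column is indexed.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE providers (
    provider_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL UNIQUE
);
CREATE TABLE gpu_metrics (
    gpu_id      TEXT NOT NULL,
    provider_id INTEGER NOT NULL REFERENCES providers(provider_id),
    utilization REAL CHECK (utilization BETWEEN 0 AND 1),
    recorded_at TEXT NOT NULL
);
CREATE INDEX idx_metrics_provider ON gpu_metrics(provider_id);
""")
conn.execute("INSERT INTO providers (name) VALUES ('example-dc')")
conn.execute(
    "INSERT INTO gpu_metrics VALUES ('g1', 1, 0.9, '2025-01-01T00:00:00Z')"
)
row = conn.execute("""
    SELECT p.name, m.utilization
    FROM gpu_metrics m JOIN providers p USING (provider_id)
""").fetchone()
print(row)  # ('example-dc', 0.9)
```

The design choice shown is the usual trade-off: normalization avoids repeating provider attributes on every metric row, at the cost of a join that the index keeps cheap.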
Nice to Have:
Hands-on experience with Kafka or other messaging platforms.
Familiarity with machine learning pipelines or AI-driven systems.
Knowledge of decentralized technologies and their data implications.
What We Offer:
An opportunity to work at the cutting edge of decentralized computing and AI.
A dynamic and supportive team that values innovation and collaboration.
Flexibility to work remotely from anywhere in the world.
A competitive compensation package, including equity and token grants.
Regular team off-sites in various global locations.
Who We Are:
io.net is at the forefront of decentralized computing, revolutionizing the industry by creating the world’s largest decentralized compute platform. Our innovative “Internet of GPUs” addresses the global shortage in compute power by pooling GPUs from various sources into one powerful platform. This allows ML engineers to access nearly unlimited computing resources at a fraction of the cost, fueling AI innovation and transforming the future of technology.
How to Apply:
If you’re excited about the opportunity to join a revolutionary company at the intersection of AI, blockchain, and decentralized computing, please visit us at io.net/careers for instructions on how to apply.
Join us at io.net and be a part of the revolution in decentralized computing!
