I recently had the privilege of participating in the "FinOps más que costes" podcast to share my experience as CTO of DockTech implementing FinOps in a 12-person startup. The conversation made me reflect on how we've managed to make cost optimization not just a necessity, but a competitive advantage.

The Hard Reality: Every Line of Code Matters

In my 20+ years in technology, I've watched the relationship between engineering and costs evolve. Earlier in my career, as programmers we cared little about costs; there was a finance department for that. But in a startup, especially in a niche like maritime where you can't easily access large funding rounds, every engineering decision is literally a purchasing decision.

Working primarily with serverless and Lambda, I've learned that a poorly optimized line of code can turn into thousands of dollars per year. A synchronous loop that waits unnecessarily, excessive logs filling CloudWatch, or poor connection management can spike costs in unpredictable ways.
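To make that concrete, here is a minimal, hypothetical Lambda handler sketch showing the habits that keep those costs down: clients created outside the handler so warm invocations reuse connections, no sleep-based waiting, and log verbosity controlled by configuration. The table name and environment variables are placeholders, not anything from our real stack.

```python
import json
import logging
import os

import boto3

logger = logging.getLogger()
# Log level comes from configuration, so verbose logging stays out of
# production and CloudWatch ingestion doesn't become a line item of its own.
logger.setLevel(os.environ.get("LOG_LEVEL", "INFO"))

# Create clients once, outside the handler, so warm invocations reuse the
# same connections instead of paying the setup cost on every request.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("TABLE_NAME", "orders"))  # placeholder name


def handler(event, context):
    # Avoid sleep-based polling inside a Lambda: you are billed for the wait.
    # Prefer event-driven triggers (SQS, EventBridge) and keep the handler
    # doing only the work it was invoked for.
    order_id = event.get("order_id")
    item = table.get_item(Key={"order_id": order_id}).get("Item")

    # Log a compact summary, not the whole payload.
    logger.info("processed order %s", order_id)
    return {"statusCode": 200, "body": json.dumps({"found": item is not None})}
```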

Building a FinOps Culture from the DNA

The key isn't just implementing cost monitoring tools, but fundamentally changing how we think about development. In any early-stage startup, you must establish that:

Every programmer is responsible for the costs of what they design. When a developer presents the design of a new feature, they must include not only functional and non-functional requirements, but also a cost estimate using tools like the AWS Pricing Calculator.
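As a rough illustration, the kind of back-of-the-envelope number we attach to a design doc might look like the sketch below. The traffic figures and per-unit prices are invented placeholders; the real estimate should always pull current regional pricing from the AWS Pricing Calculator.

```python
# Back-of-the-envelope estimate for a hypothetical Lambda-backed feature.
REQUESTS_PER_MONTH = 5_000_000          # assumed traffic
AVG_DURATION_S = 0.3                    # assumed average execution time
MEMORY_GB = 0.512                       # 512 MB configured memory

PRICE_PER_GB_SECOND = 0.0000166667      # placeholder compute price
PRICE_PER_MILLION_REQUESTS = 0.20       # placeholder request price

compute_cost = REQUESTS_PER_MONTH * AVG_DURATION_S * MEMORY_GB * PRICE_PER_GB_SECOND
request_cost = (REQUESTS_PER_MONTH / 1_000_000) * PRICE_PER_MILLION_REQUESTS

print(f"Estimated monthly cost: ${compute_cost + request_cost:,.2f}")
```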

This isn't easy. Indirect costs are especially hard to predict. One early-stage company I worked with implemented stream processing with Flink and estimated the direct costs well, but didn't anticipate that GuardDuty would analyze all of that intensive activity, triggering unexpected charges.

Total Visibility: The Power of Transparency

One of our most important decisions was giving complete visibility to the entire team about our cloud spending. Every person in the company can see how much we pay for AWS, MongoDB Atlas, and other services. This transparency creates awareness and collective responsibility.

We also try to understand our "cloud efficiency rate" even before having significant revenue. We need to know what percentage of our future income will go to infrastructure costs to make informed business decisions.
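A minimal sketch of that calculation, with entirely invented figures, looks like this:

```python
# "Cloud efficiency rate": what share of (current or projected) revenue goes
# to infrastructure. All numbers below are made up for illustration.
monthly_cloud_spend = {
    "AWS": 4200.0,
    "MongoDB Atlas": 900.0,
    "Other SaaS": 350.0,
}
projected_monthly_revenue = 25_000.0  # assumption for a pre-revenue plan

total_spend = sum(monthly_cloud_spend.values())
efficiency_rate = total_spend / projected_monthly_revenue

print(f"Infra spend: ${total_spend:,.0f} "
      f"({efficiency_rate:.0%} of projected revenue)")
```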

Code Reviews with Cost Consciousness

We've integrated cost considerations into our code review process. When reviewing code, we don't just look for bugs or style issues, but also potential cost impacts. Is this synchronous call necessary? Are we logging too much information? Could this function be optimized to reduce execution time?
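As a small, hypothetical example of the kind of change such a review produces, here per-event payload logging is collapsed into one compact line per batch; the function names and the omitted processing logic are placeholders.

```python
import json
import logging

logger = logging.getLogger(__name__)


def process_events_costly(events):
    for event in events:
        # Review flag: dumping every payload to CloudWatch gets expensive at
        # volume, and the detail is rarely used afterwards.
        logger.info("processing event: %s", json.dumps(event))
        ...  # actual processing omitted


def process_events_cheaper(events):
    # Same processing, but one compact log line per batch.
    logger.info("processing %d events", len(events))
    ...  # actual processing omitted
```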

This level of scrutiny might seem excessive, but when you're working with limited runway, every optimization counts.

The Vision: Static Code Analysis for FinOps

I'm passionate about static code analysis for security, and I believe the same principles should apply to FinOps. Imagine having tools that could analyze your code before deployment and flag potential cost issues, just like we do for security vulnerabilities.

This could be part of the CI/CD pipeline: if your code doesn't pass the cost efficiency check, it doesn't get deployed. We're not there yet, but I see this as the future of FinOps tooling.
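Purely as a thought experiment, a first crude version of such a check might be a handful of regex heuristics run in CI over the handler sources, failing the build when a known cost smell shows up. The patterns, the `src` directory, and the messages below are all assumptions, nothing like a finished tool.

```python
import pathlib
import re
import sys

# Pattern -> why it tends to cost money. Purely illustrative heuristics.
COST_SMELLS = {
    r"\btime\.sleep\(": "billed idle time inside a handler",
    r"json\.dumps\(event\)": "logging whole payloads to CloudWatch",
}


def lint_file(path: pathlib.Path) -> list[str]:
    findings = []
    source = path.read_text()
    for pattern, reason in COST_SMELLS.items():
        for match in re.finditer(pattern, source):
            line_no = source[: match.start()].count("\n") + 1
            findings.append(f"{path}:{line_no}: {reason}")
    return findings


if __name__ == "__main__":
    problems = [f for p in pathlib.Path("src").rglob("*.py") for f in lint_file(p)]
    print("\n".join(problems))
    sys.exit(1 if problems else 0)  # a non-zero exit blocks the deploy
```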

Learning from Economic Cycles

The current economic environment, with higher interest rates and tighter capital, has actually been beneficial for startups that embrace FinOps early. While it's more challenging to raise money, companies that build cost efficiency into their DNA are better positioned for the long term.

As I mentioned in the podcast, everything is cyclical. We know the easy money will return eventually. But companies that survive and thrive during the tough times will be the ones that seize opportunities when the market improves.

The Bigger Picture: Design Beauty and Cost Efficiency

I believe there's a strong correlation between good design and cost efficiency. A system with a harmonious architecture, one with a certain beauty in its design, tends to go hand in hand with both product quality and cost optimization. When you optimize for costs thoughtfully, you often end up with a better overall system design.

Key Takeaways for Startup CTOs

  1. Make cost responsibility part of every engineer's job description
  2. Implement complete cost transparency across the organization
  3. Include cost estimation in your design review process
  4. Set up anomaly alerts (see the sketch after this list); they'll save you from unexpected spikes
  5. Think about unit economics from day one, even before significant revenue
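On point 4: AWS has managed Cost Anomaly Detection, which is the sensible default. The sketch below is just a minimal self-rolled version of the same idea, comparing yesterday's spend against a trailing two-week baseline via the Cost Explorer API; the 1.5x threshold is an arbitrary starting point to tune against your own spend profile.

```python
import datetime

import boto3

ce = boto3.client("ce")
today = datetime.date.today()
start = today - datetime.timedelta(days=15)

response = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": today.isoformat()},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
)
daily = [float(r["Total"]["UnblendedCost"]["Amount"]) for r in response["ResultsByTime"]]

baseline = sum(daily[:-1]) / len(daily[:-1])   # trailing average
latest = daily[-1]                             # most recent full day

if latest > baseline * 1.5:
    print(f"Cost anomaly: ${latest:.2f} vs ~${baseline:.2f}/day baseline")
```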

The reality is that I'd prefer to have more capital and not worry about these details. But we don't have that luxury. This constraint has forced us to build better, more efficient systems, and ultimately, I believe it's made us a stronger company.

FinOps isn't just about cutting costs; it's about building sustainable, efficient technology that supports long-term business success. In a startup, that discipline can be the difference between survival and failure.

FinOps in the Age of AI: The New Cost Frontier

As AI becomes central to many startup offerings, we're facing an entirely new category of cost management challenges. Unlike traditional cloud resources that charge for compute time or storage, AI services operate on token economics, and this changes everything.

The Token Economics Revolution

Every interaction with an LLM costs money based on both input and output tokens. This means prompt engineering becomes cost engineering. A verbose prompt consuming 200 tokens versus an optimized 50-token version doesn't just improve response time; it cuts the input-token cost of that call by 75%.

I've observed startups reduce AI costs by 60% through simple prompt optimization: being specific about output length, removing unnecessary context, and crafting precise instructions. When you're paying per token, every word has a price tag.
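As a hedged illustration of that arithmetic, with placeholder per-1K-token prices and an assumed completion length:

```python
# Rough per-request cost comparison for the 200-token vs 50-token prompt
# mentioned above. Prices and volumes are placeholders; check your provider.
PRICE_PER_1K_INPUT = 0.0005
PRICE_PER_1K_OUTPUT = 0.0015
OUTPUT_TOKENS = 300  # assumed average completion length


def request_cost(input_tokens: int, output_tokens: int = OUTPUT_TOKENS) -> float:
    return ((input_tokens / 1000) * PRICE_PER_1K_INPUT
            + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT)


verbose, optimized = request_cost(200), request_cost(50)
monthly_requests = 1_000_000  # assumed volume
print(f"Verbose:   ${verbose * monthly_requests:,.0f}/month")
print(f"Optimized: ${optimized * monthly_requests:,.0f}/month")
```

Note that trimming the prompt only shrinks the input side of the bill; being explicit about output length, as mentioned above, often moves the total even more.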

Smart Caching Strategy

The most underutilized AI cost optimization is caching. Implement semantic caching for similar user queries, cache expensive embedding generations, and leverage provider-specific prompt caching for repeated system messages. A well-implemented caching layer can reduce AI costs by 40-80% while improving response times.
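Here's a minimal sketch of the idea, using prompt normalization and exact-match lookups rather than true embedding-based similarity; `call_llm` is a hypothetical stand-in for your provider's client, not a real API.

```python
import hashlib


class PromptCache:
    def __init__(self):
        self._store: dict[str, str] = {}

    @staticmethod
    def _key(prompt: str) -> str:
        # Normalize whitespace and case so trivially different prompts collide.
        normalized = " ".join(prompt.lower().split())
        return hashlib.sha256(normalized.encode()).hexdigest()

    def get(self, prompt: str) -> str | None:
        return self._store.get(self._key(prompt))

    def put(self, prompt: str, response: str) -> None:
        self._store[self._key(prompt)] = response


def call_llm(prompt: str) -> str:
    # Stand-in for your provider's API call; replace with the real client.
    return f"(model response to: {prompt[:40]})"


cache = PromptCache()


def answer(prompt: str) -> str:
    if (hit := cache.get(prompt)) is not None:
        return hit  # cache hit: no tokens spent
    response = call_llm(prompt)
    cache.put(prompt, response)
    return response
```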

Context Window Management

Maintaining conversation history or large context windows can exponentially increase costs. Implement intelligent context pruning: keep only essential information for the task. Consider breaking complex operations into smaller, focused AI calls rather than one expensive, context-heavy request.
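A minimal pruning sketch, assuming a fixed token budget and using a crude word-count heuristic in place of a real tokenizer:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic only; in practice use your model's tokenizer.
    return int(len(text.split()) * 1.3)


def prune_history(system: str, turns: list[str], budget: int = 2000) -> list[str]:
    """Keep the system message plus as many recent turns as fit the budget."""
    kept: list[str] = []
    used = estimate_tokens(system)
    for turn in reversed(turns):          # walk from the newest turn backwards
        cost = estimate_tokens(turn)
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    return [system] + list(reversed(kept))
```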

The key is treating AI costs with the same rigor we apply to traditional infrastructure. Monitor token usage by feature, track cost per user interaction, and always consider the model performance versus cost trade-off. Sometimes a smaller, faster model at 10% the cost delivers 90% of the value.
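A minimal sketch of that per-feature tracking, again with placeholder prices and invented feature names, so "cost per user interaction" becomes a number you can actually look at:

```python
from collections import defaultdict

PRICE_PER_1K_INPUT = 0.0005    # placeholder prices
PRICE_PER_1K_OUTPUT = 0.0015

usage = defaultdict(lambda: {"calls": 0, "cost": 0.0})


def record(feature: str, input_tokens: int, output_tokens: int) -> None:
    cost = ((input_tokens / 1000) * PRICE_PER_1K_INPUT
            + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT)
    usage[feature]["calls"] += 1
    usage[feature]["cost"] += cost


record("search_summary", input_tokens=400, output_tokens=250)
record("chat_assistant", input_tokens=1200, output_tokens=600)

for feature, stats in usage.items():
    print(f"{feature}: {stats['calls']} calls, "
          f"${stats['cost'] / stats['calls']:.4f} per interaction")
```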

In the AI era, FinOps expertise isn't just about infrastructure anymore; it's about understanding the economics of intelligence itself.