LLM Benchmark V2: Preview of the New Benchmark Generation

This benchmark report is set to be an exciting journey. We’ll begin by exploring key performance benchmarks and wrap up with a forecast of Nvidia’s stock price (please note: this is not financial advice).

  • Second Generation Benchmark - Early Preview

  • DeepSeek r1

  • Cost and Price Dynamics of DeepSeek r1

LLM Benchmark Gen2 - Early Preview

In recent months, we’ve been heavily revising our first-generation LLM Benchmark. Gen1 focused on business workload automation but relied on insights from AI use cases completed in 2023.

In the final months of running it, this reliance began to show: the top scores were saturating, with too many models achieving high marks. The test cases had also become somewhat outdated. They no longer reflected the insights gathered over the past year through our AI research and practical work with companies in the EU and USA.

So we’ve been building a new generation of the benchmark to incorporate both new LLM capabilities and new insights. The timing was just right: o1 pro came out to challenge the complexity of the benchmark, and shortly thereafter DeepSeek r1 made reasoning models widely accessible.

Here’s an early preview of our v2 benchmark. It may not seem like much on the surface, but it already deterministically compares models on complex business tasks while allowing each model to engage in reasoning before delivering an answer.

We’ll go into the DeepSeek r1 analysis in a bit; for now, let’s focus on the benchmark itself.
Here’s the current progress and an overview of what we plan to include:

  • Current progress:

    • ~10% of relevant AI cases are currently mapped to v2.
    • As we progress towards 100%, results will become more representative of AI/LLM applications in modern business workflows.
       
  • Structured Outputs:

    • Using structured outputs (following a predefined schema precisely) is a common industry standard.
    • Supported by OpenAI, Google, and local inference engines, which makes constrained decoding a key feature of our benchmark.
    • Locally capable models are included wherever applicable.
       
  • Focus on business tasks:

    • Benchmarks currently focus on tasks requiring multiple logical steps in a single prompt.
    • Not every complex AI project needs full creative autonomy; for tasks like regulatory compliance, it is counterproductive.
    • Smaller, locally deployable models can often perform better in such cases by following an auditable reasoning path.
       
  • Future additions:

    • We will reintroduce simpler logical tasks over time.
    • A new "plan generation" category will be added, allowing for deeper analysis of workflow-oriented LLM use.
    • Our ultimate goal: evaluate if powerful, cloud-based models can be replaced by simpler, local models that follow structured workflows.
       
  • Customer and partner insights:

    • Customers and partners have requested access to benchmark details for inspiration and guidance in their projects.
    • Unlike v1, v2 will include clean, non-restricted test cases that we can share upon request.
       
  • Current categories:

    • Only a few categories of test cases are included so far.
    • Additional categories will be added as the benchmark evolves.
       

The full journey will take a few months, but the hardest part—bringing many moving pieces together into a single coherent framework—is done!
From here on, LLM Benchmark v2 will only continue to improve.
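To illustrate the structured-output requirement mentioned above, here is a minimal sketch of the verification side: a model answer must parse as JSON and match a predefined schema exactly before it is scored. The schema and the invoice task are hypothetical examples, not actual benchmark cases.

```python
# Hypothetical illustration: a benchmark task demands JSON that
# matches a predefined schema exactly, so scoring is deterministic.
import json

# Made-up schema for a business task: extract invoice fields.
INVOICE_SCHEMA = {
    "invoice_id": str,
    "total_eur": float,
    "line_items": list,
}

def validate(answer: str, schema: dict) -> dict:
    """Parse a model's raw answer and check it against the schema."""
    data = json.loads(answer)
    if set(data) != set(schema):
        raise ValueError(f"field mismatch: {sorted(data)}")
    for field, expected_type in schema.items():
        if not isinstance(data[field], expected_type):
            raise TypeError(f"{field}: expected {expected_type.__name__}")
    return data

# A (hypothetical) model answer that satisfies the schema.
raw = '{"invoice_id": "INV-001", "total_eur": 129.5, "line_items": ["GPU rental"]}'
parsed = validate(raw, INVOICE_SCHEMA)
```

In production, constrained decoding (e.g. OpenAI’s structured outputs or schema-guided local inference) prevents the model from emitting anything outside the schema in the first place; the checker above only shows the deterministic comparison that makes benchmark results reproducible.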

DeepSeek r1

Let’s talk about the elephant in the room. DeepSeek r1 is the new Chinese model, much faster and cheaper than OpenAI’s flagship o1. In addition to being locally deployable (anyone can download the weights), it is also positioned as the smarter model.

No surprise that stocks took a hit after these developments.

Let’s start with its reasoning capabilities. According to our benchmarks, DeepSeek r1 performs exceptionally well:

  • It outperforms almost all variants of OpenAI’s 4o models.
  • It surpasses any open-source model.
  • However, it still lags behind OpenAI’s o1 and GPT-4o (August 2024 edition).

Also remember that the base DeepSeek r1 is a Mixture of Experts model containing 685B parameters in total (which means you’ll need sufficient GPU capacity to handle them all). When comparing its score progression to other large open-source models, the progress appears roughly proportional to its size.

Do you see a smaller elephant in the room that breaks this pattern? It’s the distillation of DeepSeek r1’s capabilities into Llama 70B! This locally deployable model isn’t the one everyone is focusing on, but it could potentially be the most significant development.

If you can enhance any good foundational model by distilling r1’s reasoning capabilities and allowing it to reason before producing an answer, it presents an attractive alternative. This approach could make common models faster and more efficient.

To summarise:

The DeepSeek r1 model is really good, but it’s not yet good enough to directly compete with OpenAI’s o1. Its immediate challenge is to outperform OpenAI’s 4o models before moving on to higher competition.

The technology behind DeepSeek r1 is promising and will likely lead to a new generation of more efficient reasoning models derived from distillation approaches built on its foundation. This aligns with a prediction we made in December: AI vendors will increasingly provide reasoning capabilities similar to OpenAI’s o1 models as a shortcut to quickly improve model performance. The idea is simple—allocate more compute, let the model take longer to reason before answering, and charge more for API usage. This workaround allows for accuracy improvements without requiring heavy investments in training new foundational models.

However, we also foresee that the current hype around smart reasoning models that are extremely expensive will eventually fade. Their practicality is limited, and they will likely give way to more cost-effective solutions.

 

Cost and Price Dynamics of DeepSeek r1

DeepSeek r1 offers a cost-effective pricing model. The cost per 1 million input tokens is just $0.55, while 1 million output tokens are priced at $2.19. This affordability positions it as a competitive option in the market, especially for those seeking locally deployable AI models.

This is significantly cheaper than OpenAI’s o1 or 4o pricing. Let’s break it down in a table to make things clearer.

We’ll also calculate the total price for a common business workload with a typical 10:1 ratio—10 million input tokens and 1 million output tokens. This ratio is common in data extraction and retrieval-augmented generation (RAG) systems, which are prevalent in our AI use cases.

Model          | 1M Input Tokens | 1M Output Tokens | Cost of 10M:1M
---------------|-----------------|------------------|---------------
DeepSeek r1    | $0.55           | $2.19            | $7.69
OpenAI gpt-4o  | $2.50           | $10.00           | $35.00
OpenAI o1      | $15.00          | $60.00           | $210.00

At this point, we can confidently say that the pricing of DeepSeek r1 blows everyone else out of the water. It’s not just “25x cheaper than OpenAI o1” for common business workloads—it’s 27x cheaper.
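The per-workload figures in the table follow from a simple formula: 10M input tokens plus 1M output tokens, priced per million. A short calculation reproduces them:

```python
# Cost of a 10:1 workload (10M input + 1M output tokens),
# using the per-1M-token prices from the table above.
prices = {
    "DeepSeek r1":   (0.55, 2.19),
    "OpenAI gpt-4o": (2.50, 10.00),
    "OpenAI o1":     (15.00, 60.00),
}

def workload_cost(input_price: float, output_price: float,
                  input_m: float = 10, output_m: float = 1) -> float:
    """Total USD cost for input_m million input and output_m million output tokens."""
    return input_m * input_price + output_m * output_price

costs = {model: workload_cost(*p) for model, p in prices.items()}
ratio = costs["OpenAI o1"] / costs["DeepSeek r1"]  # ~27x
```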

However, the devil is in the details. The currently offered price might not perfectly reflect the actual market price or the real cost of running the business due to various factors.

First of all, can DeepSeek even handle all the demand? According to their status page, the API has been in “Major Outage” mode since January 27th. This means they aren’t actually serving all LLM requests at the advertised price.

In general, if you examine DeepSeek’s financial incentives as a company, you might find that turning a profit may not be its primary motivation. DeepSeek is owned by High-Flyer, a Chinese hedge fund (see Wikipedia), so in theory it could make more money by shorting Nvidia. But let’s set that theory aside for now.

Still, it’s an interesting coincidence that January 27th, the day their API entered “Major Outage” mode, is also the same day Nvidia’s stock took a nosedive.

To take a deeper dive into LLM price dynamics, we can refer to a popular LLM marketplace called OpenRouter.

OpenRouter conveniently aggregates multiple providers under a single API, creating an open market for LLM-as-a-service offerings. Since DeepSeek r1 is an open-source model, multiple providers can deploy and serve it at their own pricing, allowing supply and demand to naturally balance the market.

Here’s what the pricing looks like for the top-rated providers of DeepSeek r1 at the time of writing this article (“nitro” denotes OpenRouter’s throughput-prioritized routing variant):

As you can see, DeepSeek attempts to offer its API at the advertised prices, but with a few nuances:

As a cost-cutting measure, it limits its input and output capacities to a fraction of what other providers offer. Compare the “Context” and “Max Output” sizes, but also keep in mind that DeepSeek’s original pricing includes a 32K Reasoning token limit in addition to an 8K Output Limit.

Normally, OpenRouter routes requests to the cheapest provider, letting market dynamics determine the flow. However, the DeepSeek r1 API hasn’t been able to keep up with the current demand and has been explicitly de-ranked with the message: “Users have reported degraded quality. Temporarily deranked.”

Meanwhile, alternative competitors—who are more motivated to generate profit—charge noticeably higher prices per input and output tokens. Despite the higher costs, they can handle the demand and maintain consistently high throughput.

Effectively, the current market price for stable access to DeepSeek r1 is around $7 to $8 per 1M Input/Output tokens. For an average 10:1 workload (10M input tokens and 1M output tokens), this results in a total cost of $77.

This is roughly twice as expensive as using the similarly capable GPT-4o, which comes in at $35 for the same workload.

These estimates are based on the current market price and don’t necessarily reflect the real cost of running DeepSeek r1 if you were to deploy it independently. For that, we can look at the latest report from Nvidia, which describes running DeepSeek r1 on the latest NVIDIA HGX H200 at 3,872 tokens per second using native FP8 inference.

Assuming a 2-year rental cost of $16 per hour for an HGX H200 in Silicon Valley, and running the optimized software stack at ideal capacity, this results in a cost of $1.15 per 1M input/output tokens. For a 10:1 workload, that’s $12.65 per workload, which is higher than the $7.69 price that DeepSeek r1 currently advertises.
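The self-hosting estimate above can be reproduced directly, assuming the $16/hour rental price and the 3,872 tokens/second throughput from the Nvidia report, and treating input and output tokens as equally priced:

```python
# Self-hosting cost sketch for DeepSeek r1 on an HGX H200
# (assumed figures: $16/hour rental, 3,872 tokens/second sustained).
tokens_per_second = 3_872
rental_per_hour = 16.00  # USD, 2-year rental commitment

tokens_per_hour = tokens_per_second * 3600               # ~13.9M tokens/hour
cost_per_1m = rental_per_hour / (tokens_per_hour / 1e6)  # ~$1.15 per 1M tokens

# 10:1 workload: 10M input + 1M output = 11M tokens total
workload_cost = 11 * round(cost_per_1m, 2)               # ~$12.65
```

This assumes ideal, fully utilized capacity; any idle time or lower throughput pushes the real cost per token higher.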

However, DeepSeek doesn’t seem to have access to the latest Nvidia hardware like the HGX H200. They are reportedly limited to H800 GPUs, an export version of the H100 with reduced memory bandwidth, which could drive their actual running costs even higher.

No matter how we analyze the numbers, the same picture emerges:

We don’t see how DeepSeek r1 could truly be 25x cheaper than OpenAI’s o1 unless its pricing is heavily subsidized. However, subsidized prices and high market demand don’t usually mix well in the long term.

On our early v2 LLM benchmark, DeepSeek r1 demonstrates reasoning capabilities comparable to an older OpenAI GPT-4o from August 2024. It’s not on the same level as OpenAI’s o1 yet.

Additionally, both OpenAI’s o1 and 4o models are multi-modal and natively support working with images and complex documents, whereas DeepSeek r1 is limited to text-only inputs. This further separates them, especially when dealing with document-oriented business workloads.

Given this, the recent stock market reaction to a promising but limited Chinese text-only model—comparable to an older OpenAI version and sold at a subsidized price—might be an overreaction. The trend toward multi-modal foundational models that understand the world beyond text is where future value lies, offering Nvidia and its partners new opportunities to generate significant returns.

Stock Market Prediction (not financial advice): Nvidia will rebound and continue to grow quickly, driven by real-world workloads and sustainable cost dynamics.

Meanwhile, the DeepSeek r1 model is still interesting and could help OpenAI’s competitors close the gap (especially since there haven’t been major innovations recently from Anthropic and its Claude Sonnet line). However, DeepSeek r1 itself may fade away due to its cost dynamics, as its distilled versions are already showing stronger potential in our new Benchmark v2.

Do you want to put your RAG to the test? We are planning to run the second round of our Enterprise RAG Challenge at the end of February!

The Enterprise RAG Challenge is a friendly competition where we compare how different RAG architectures answer questions about business documents.

We held the first round of this challenge last summer. The results were impressive: with just 16 participating teams, we were able to compare different RAG architectures and discover the power of using structured outputs in business tasks.

The second round is scheduled for February 27th. Mark your calendars!


Jonathan ChannonBlog
Blog

Tracing IO in .NET Core

Learn how we leverage OpenTelemetry for efficient tracing of IO operations in .NET Core applications, enhancing performance and monitoring.

Christian FolieBlog
Blog

Designing and Running a Workshop series: The board

In this part, we discuss the basic design of the Miro board, which will aid in conducting the workshops.

Blog
Blog

My Workflows During the Quarantine

The current situation has deeply affected our daily lives. However, in retrospect, it had a surprisingly small impact on how we get work done at TIMETOACT GROUP Austria.

Ian RussellIan RussellBlog
Blog

Introduction to Functional Programming in F# – Part 7

Explore LINQ and query expressions in F#. Simplify data manipulation and enhance your functional programming skills with this guide.

Balazs MolnarBalazs MolnarBlog
Blog

Learn & Share video Obsidian

Knowledge is very powerful. So, finding the right tool to help you gather, structure and access information anywhere and anytime, is rather a necessity than an option. You want to accomplish your tasks better? You want a reliable tool which is easy to use, extendable and adaptable to your personal needs? Today I would like to introduce you to the knowledge management system of my choice: Obsidian.

Ian RussellIan RussellBlog
Blog

Introduction to Functional Programming in F# – Part 3

Dive into F# data structures and pattern matching. Simplify code and enhance functionality with these powerful features.

Daniel PuchnerBlog
Blog

How to gather data from Miro

Learn how to gather data from Miro boards with this step-by-step guide. Streamline your data collection for deeper insights.

Ian RussellIan RussellBlog
Blog

Introduction to Functional Programming in F# – Part 12

Explore reflection and meta-programming in F#. Learn how to dynamically manipulate code and enhance flexibility with advanced techniques.

Jonathan ChannonBlog
Blog

Understanding F# applicatives and custom operators

In this post, Jonathan Channon, a newcomer to F#, discusses how he learnt about a slightly more advanced functional concept — Applicatives.

Laura GaetanoBlog
Blog

5 lessons from running a (remote) design systems book club

Last year I gifted a design systems book I had been reading to a friend and she suggested starting a mini book club so that she’d have some accountability to finish reading the book. I took her up on the offer and so in late spring, our design systems book club was born. But how can you make the meetings fun and engaging even though you're physically separated? Here are a couple of things I learned from running my very first remote book club with my friend!

Christian FolieBlog
Blog

Running Hybrid Workshops

When modernizing or building systems, one major challenge is finding out what to build. In Pre-Covid times on-site workshops were a main source to get an idea about ‘the right thing’. But during Covid everybody got used to working remotely, so now the question can be raised: Is it still worth having on-site, physical workshops?

Rinat AbdullinRinat AbdullinBlog
Blog

Celebrating achievements

Our active memory can be like a cache of recently used data; fresh ideas & frustrations supersede older ones. That's why celebrating achievements is key for your success.