AI Architecture Benchmarks

August 2024

This month we have prepared something special for you. Instead of benchmarking just the LLMs, we present a first benchmark of various AI architectures.

This was done as the first round of our Enterprise RAG Challenge. Within that challenge we worked together with individual consultants and some vendors of commercial AI solutions.

Industry Overview

First of all, we have mapped all proven cases of successful AI application that are known to us onto a single map, organized by industry and impact area.

Afterwards, we reviewed the entire portfolio and identified recurring themes that persist across industry and application boundaries. There were a few:

  • Many successful applications of AI in business boil down to using ChatGPT with a couple of simple LLM patterns: checklists, routers and knowledge maps. It can be surprising how much value can be achieved with just a few prompts and lines of code - the router sketch after this list illustrates the idea.

  • Most successful cases don’t act as standalone systems, but rather integrate into existing processes as copilots and assistants. Sometimes they are even invisible to the end users.

  • If we look at the numbers alone, the most popular AI case is building “AI Search” or “AI Assistants” for the business.
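To illustrate how little code one of these patterns can take, here is a minimal router sketch, assuming an OpenAI-backed setup; the route names and prompt wording are illustrative assumptions, not taken from any specific customer case.

```python
# A minimal LLM router sketch: one prompt classifies an incoming request so it
# can be handed to the right downstream workflow. Route names are hypothetical.
from openai import OpenAI

client = OpenAI()
ROUTES = ["invoice_processing", "contract_review", "general_question"]

def route_request(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Classify the request into exactly one of: "
                        + ", ".join(ROUTES) + ". Reply with the label only."},
            {"role": "user", "content": text},
        ],
    )
    label = response.choices[0].message.content.strip()
    return label if label in ROUTES else "general_question"  # safe fallback
```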

AI Search or AI Assistants are examples of use cases where a company wants a system that can provide intelligent answers based on files and documents. This is the most popular case and sometimes an entry point into AI for companies.

This is also one of the most controversial types of project. A popular opinion is that such a solution should be implemented using vector databases and RAG systems. However, even if you stick to that opinion (which we don’t), there are many different LLMs, frameworks and architectural nuances to pick from.

So how would one implement an AI Assistant over company documents?
 

Enterprise RAG Challenge

To answer that question in a collaborative manner, we have set up the Enterprise RAG Challenge. This is a friendly competition to test the accuracy of different RAG systems on business workloads. It goes like this:

Participants build a system that can answer questions about uploaded PDF documents (annual reports), or they can test an existing AI Assistant system they have already built.

Anybody can participate, and anonymous participation is also possible. All we ask is that you share some details about your RAG implementation for the benefit of the community. We would like to learn what works better in practice and share that with everybody.

When the competition starts:

  1. Participants receive a set of annual reports as PDFs in advance. They can take some time to process them.

  2. A list of questions for these files is generated. Within a few minutes (to avoid manual processing), participants need to produce answers and upload them.

  3. Afterwards the answers are checked in public and compiled into a public dataset.

You will be able to compare the performance of different teams and technologies (if a team decided to answer a few questions) within a single table. We will also compile and publish a report at the end of the challenge.

You can read more about the competition on GitHub. The description there is somewhat geeky, since we went to great lengths to make sure that the competition is fair for everybody.


Round 1

At the end of the summer we started the first trial run.

All information about the first round is publicly available on our GitHub page under the Apache license. The code for the question generator, file selector, random seed selector and ranking is also available, as are the team submissions.

The teams received 20 Annual Reports in PDF form and were expected to automatically generate responses to questions like:

Which company had a higher total assets: "MITSUI O.S.K. LINES", "ENRG ELEMENTS LIMITED" or "First Mid Bancshares, Inc.", in the fiscal year 2021?

or:

What was the free cash flow of "Österreichische Kontrollbank" in the financial year 2023?

The last question is actually a trick question to test for hallucinations. The Österreichische Kontrollbank report covered only the year 2022. Models are expected to refuse to answer and return N/A in such cases.

A complete list of questions and the original annual reports can be found in the GitHub repository.

We got 17 submissions in total with some teams participating anonymously. Teams shared their architectures, LLM models and sometimes even more details:

Let’s review the table a bit:

Best solution - Checklist with GPT-4o

The highest-scoring solution is from Daniel Weller, a colleague from TIMETOACT GROUP Austria. It scored 84 out of a maximum of 100 points.

ℹ️ We have taken great care to ensure that all participants compete under the same conditions (please read the description on GitHub for more details) and to make the competition fair for everyone. For transparency, we explicitly mark affiliation with TIMETOACT in the TTA column.

In addition, some competitors also participate in the AI research program or benefit from its findings. These participants are marked in the AIR column for transparency.

Daniel has agreed to publish the source code for his solution. As soon as it is available, we will update the GitHub repository with the links. The status of the source code release can be seen in the source column.

Under the hood, Daniel’s solution uses the GPT-4o model with structured outputs. During the pre-fill phase, it benefits from the fact that the possible types of questions were shared publicly with all participants in the form of the public question generator code. So we prepare a checklist of the possible types of information to extract, enforce the data types with Structured Outputs, and run it against all documents to extract the necessary information. Large documents are split based on their size.
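To make the checklist idea more concrete, here is a minimal sketch of such a pre-fill step using the OpenAI SDK’s Structured Outputs support. The CompanyChecklist schema and its fields are our own illustrative assumptions, not Daniel’s actual implementation.

```python
# A hedged sketch of the pre-fill phase: extract checklist fields from one
# report chunk and let Structured Outputs enforce the data types.
from typing import Optional
from pydantic import BaseModel
from openai import OpenAI

class CompanyChecklist(BaseModel):
    # Hypothetical fields, derived from the kinds of questions the public
    # question generator can produce (total assets, cash flow, etc.).
    company_name: str
    fiscal_year: Optional[int]
    total_assets: Optional[float]
    free_cash_flow: Optional[float]

client = OpenAI()

def prefill_checklist(report_chunk: str) -> CompanyChecklist:
    completion = client.beta.chat.completions.parse(
        model="gpt-4o-2024-08-06",
        messages=[
            {"role": "system",
             "content": "Extract the requested financial facts from the report. "
                        "Leave a field empty if the report does not state it."},
            {"role": "user", "content": report_chunk},
        ],
        response_format=CompanyChecklist,  # Structured Outputs enforce the schema
    )
    return completion.choices[0].message.parsed
```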

During the question-answering phase, we go through each question and pass it to GPT-4o together with the pre-filled checklist data. The resulting answer is shaped into the proper schema by using structured outputs again.
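The answering step could then look roughly like the following sketch, reusing the client and the hypothetical CompanyChecklist objects from above; the Answer schema is again an assumption, not the official challenge submission format.

```python
# A hedged sketch of the question-answering phase: answer strictly from the
# pre-filled checklist data and refuse with "N/A" when the facts are missing.
from pydantic import BaseModel

class Answer(BaseModel):
    value: str       # numbers serialized as text; "N/A" signals a refusal
    reasoning: str   # short justification, useful for debugging

def answer_question(question: str, checklists: list[CompanyChecklist]) -> Answer:
    facts = "\n".join(c.model_dump_json() for c in checklists)
    completion = client.beta.chat.completions.parse(
        model="gpt-4o-2024-08-06",
        messages=[
            {"role": "system",
             "content": "Answer strictly from the extracted facts below. "
                        "Return 'N/A' if they do not cover the question.\n" + facts},
            {"role": "user", "content": question},
        ],
        response_format=Answer,
    )
    return completion.choices[0].message.parsed
```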

The solution was a bit on the expensive side. The information pre-fill for 20 PDFs consumed almost $6, while answering 40 questions took $2.44.

In this challenge we don’t place any limits on the cost of the solution, but encourage participants to capture and share cost data. Readers can then prioritise resulting solutions based on their own criteria.


Second Best - Classic RAG with GPT-4o

The second-best solution came from Ilya Rice. It scored 76 points, achieved with GPT-4o and a classical LangChain-based RAG. It used one of the best embedding models, text-embedding-3-large from OpenAI, together with custom Chain-of-Thought prompts. The solution used fitz for text parsing and chunked texts by character count.
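A rough sketch of such an ingestion pipeline is shown below; the chunk size, overlap and storage format are assumptions on our side, not Ilya’s published configuration.

```python
# A hedged ingestion sketch: PDF text via fitz (PyMuPDF), character-based
# chunking, and OpenAI embeddings with text-embedding-3-large.
import fitz  # PyMuPDF
from langchain_text_splitters import CharacterTextSplitter
from openai import OpenAI

client = OpenAI()

def index_report(pdf_path: str) -> list[tuple[str, list[float]]]:
    # Extract the raw text page by page
    doc = fitz.open(pdf_path)
    text = "\n".join(page.get_text() for page in doc)

    # Chunk purely by character count, as described above (sizes assumed)
    splitter = CharacterTextSplitter(chunk_size=1500, chunk_overlap=200)
    chunks = splitter.split_text(text)

    # Embed each chunk; (chunk, vector) pairs can then go into any vector store
    response = client.embeddings.create(model="text-embedding-3-large", input=chunks)
    return [(chunk, item.embedding) for chunk, item in zip(chunks, response.data)]
```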


Third Best Solution - Checklists with Gemini Flash

The third-best solution was provided by Artem Nurmukhametov. His solution was architecturally similar to Daniel’s, but used multi-stage processing for the checklists. It used the Gemini Flash model from Google to drive the system.
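What “multi-stage” means here is not published in detail, so the following is only a guess at how checklist processing could be staged with Gemini Flash: a coarse extraction pass followed by a structuring pass.

```python
# A speculative two-stage checklist sketch with Gemini Flash; the staging and
# the JSON field names are assumptions, not Artem's actual pipeline.
import google.generativeai as genai

genai.configure(api_key="...")  # assumes an API key is available
model = genai.GenerativeModel("gemini-1.5-flash")

def fill_checklist(report_text: str) -> str:
    # Stage 1: shortlist the passages that mention key financial figures
    passages = model.generate_content(
        "Quote the passages of this annual report that mention total assets, "
        "cash flow or other key financial figures:\n" + report_text
    ).text

    # Stage 2: turn the shortlisted passages into a structured checklist
    return model.generate_content(
        "From these passages, fill a JSON checklist with company_name, "
        "fiscal_year, total_assets and free_cash_flow (use null if missing):\n"
        + passages,
        generation_config={"response_mime_type": "application/json"},
    ).text
```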

The solution was also on the expensive side, consuming $4 for the full challenge run.

As you may have noticed, 2 out of the 3 top solutions used the Checklist pattern and Knowledge Mapping to benefit from the fact that the domain is already known in advance. While this is a common case in businesses (we can use Domain-Driven Design and iterative product development to capture a similar level of detail), it puts classical RAG systems at a disadvantage.

To compensate for that, in the next round of the Enterprise RAG Challenge we will rework the question generator to have much more variability, making it prohibitively expensive to “cheat” by simply using Knowledge Mapping.


Best On-Premise Solution

As you may have noticed, most of the solutions used the GPT-4o LLM from OpenAI. According to our benchmarks, this is one of the best and most cost-effective LLMs currently available.

However, in the real world companies are sometimes interested in solutions that can run completely on premises. This can be desirable for various reasons: cost, IP protection or compliance.

Locality comes at some cost - local models like Llama are less capable than cloud-based models like OpenAI GPT-4 or Claude 3.5 Sonnet. To compensate for that, local AI systems are starting to leverage advanced techniques that are sometimes only possible with local models - precise guidance, fine-tuning (full fine-tuning, not the adapters that OpenAI employs), custom mixtures and ensembles of experts, or wide beam search.

It can be hard to compare the effective accuracy of drastically different approaches. The Enterprise RAG Challenge allows us to start comparing them on the same basis.

6th place is taken by a fully local system with a score of 69. The gap between this system and the winner is much smaller than we expected!

Under the hood this system uses the Qwen-72B LLM, which is quite popular in some parts of Europe and Asia. The overall architecture is based on ReAct agent loops from LangChain with a RAG-driven query engine. Table data from the PDFs was converted to XML, and the RecursiveCharacterTextSplitter was used for text chunking.
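For reference, here is a small sketch of that chunking step; the separators, chunk size and overlap are assumptions rather than the team’s actual settings.

```python
# A minimal chunking sketch with LangChain's RecursiveCharacterTextSplitter.
from langchain_text_splitters import RecursiveCharacterTextSplitter

def chunk_report(report_text: str) -> list[str]:
    # report_text is the extracted report body; tables would already have been
    # converted to an XML representation before this step.
    splitter = RecursiveCharacterTextSplitter(
        chunk_size=1000,      # characters per chunk (assumed)
        chunk_overlap=100,    # overlap to keep context across chunk borders
        separators=["\n\n", "\n", " ", ""],  # fall back to finer splits as needed
    )
    return splitter.split_text(report_text)
```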

The table has two other solutions that can run fully on premises. These are marked with a ⭐ in the "Local" column.


Round 2 - This Fall

The first round was done within a small circle of peers, to test-drive and polish the experience. The reception was much better than we expected.

We are planning to host the next round of the Enterprise RAG Challenge later this fall. This round will be announced publicly and will include a few small balance changes:

  • The question generator will be rebalanced to produce fewer questions that result in an N/A answer. We’ll still keep a few around to catch hallucination cases.

  • We will generate more questions and ensure a bigger diversity of possible questions. This will make the competition more challenging for approaches based on Knowledge Mapping and the Checklist LLM pattern.

All changes will be made public and shared as open source before the start of the competition. Every participant will be able to use that knowledge to prepare for the competition.

In addition, the source code of the solutions from TIMETOACT GROUP will be shared for everybody to benefit from.

We will also try to gather more data from the participants and make it more consistent.

All of that should make the results from the next round more valuable, helping to push our shared understanding of what it takes to build high-quality AI solutions for the enterprise in practice.


Strategic Outlook

We are heading into the end of the summer holidays and a new busy period for business. What can we expect in the coming months in the world of “LLMs for the Enterprise”?

First of all, architectural approaches for solving customer problems will continue evolving. As we have seen from the RAG Challenge, there isn’t a single best option that clearly beats everything else. Radically different architectures are currently competing: solutions based on Knowledge Mapping, classical vector-based RAGs, systems with dedicated agents, and knowledge graphs.

By looking at the architecture alone, it is not possible to tell in advance whether it will be the best solution. The number of lines of code is not a clear indicator either.

On the architecture side alone, there is still room for improving the quality of LLM-driven solutions.

However, LLM patterns and practices will not be the only factor driving future quality improvements. Let’s not forget that Large Language Models are continuously getting better and cheaper.

ℹ️ If you look at forum responses and online chatter, ChatGPT and Anthropic’s Claude chat seem to keep getting worse, especially in the free tiers. However, what people frequently forget is that these are user-facing products that are used for field-testing new versions of Large Language Models.

Companies are motivated to make the LLMs running underneath as cheap as possible. And that is exactly what OpenAI has done in recent years.

For the most part, companies use fixed, stable models via the API. These models have a predictable quality and do not suddenly deteriorate.

Let’s look at the progression of “LLM performance you can get for your money” over time. We’ll show a chart that demonstrates that, based on the scores from our LLM Leaderboard.

In this chart we group models not by their marketing names, but by their provider and cost tier.

Here we can see an interesting pattern: for the same amount of money, at different points in time we were able to get different accuracy.

In the first half of 2023 companies started releasing good models. Everybody started leveraging them and talking about them. After grabbing a share of the market, companies switched into cost-saving mode, releasing new, less capable versions within the same tier. We wrote about that in multiple LLM Leaderboard reports.

Starting from 2024, when even Google joined the AI race, companies started working on model quality again. They are releasing new models that work better for the same amount of money.

The progress looks quite steady so far, and it repeats across multiple LLM vendors. This makes us believe that LLMs will continue improving their “bang for buck” ratio in the next 6 months as well.

What does this mean? It is a good time to be building LLM-driven systems to help businesses create more value. They already work nicely, but they will continue getting even better - both from architectural improvements and from the releases of more capable LLMs.

We’ll continue tracking both perspectives in our monthly LLM Leaderboard.


LLM Benchmarks Archive

Interested in the benchmarks of the past months? You can find all the links on our LLM Benchmarks overview page!
