The best large language models for digital products in July 2024

The TIMETOACT GROUP LLM Benchmarks highlight the most powerful AI language models for digital product development. Discover which large language models performed best in July 2024.

Based on real benchmark data from our own software products, we evaluated how well different LLMs address specific challenges. We examined categories such as document processing, CRM integration, external integration, marketing support, and code generation.

July 2024 was a very fruitful month in the world of generative AI. We even saw a few boundaries pushed forward. We have a lot of ground to cover. Let’s get started!

The highlights of the month:

  • Codestral-Mamba 7B - new efficient LLM architecture that achieves surprisingly good results

  • GPT-4o Mini - affordable, lightweight model. The best in its class!

  • Mistral Nemo 12B - decent downloadable model in its class, designed for quantization (compression)

  • Mistral Large 123B v2 - local model that reaches the level of GPT-4 Turbo v3 and Gemini Pro 1.5. It would be the best local model if it weren't for Meta Llama 3.1.

  • Meta Llama 3.1 - a series of models with a permissive license that set new records in our benchmark.

    +++ Update +++

  • Gemini Pro 1.5 v0801 - Google suddenly manages to catch up with OpenAI and makes it into the top 3!

LLM Benchmarks | July 2024

Our benchmarks evaluate the models in terms of their suitability for digital product development. The higher the score, the better.

☁️ - Cloud models with proprietary license
✅ - Open source models that can be run locally without restrictions
🦙 - Local models with Llama2 license

A more detailed explanation of the respective categories can be found below the table.

| Model | Code | CRM | Docs | Integrate | Marketing | Reason | Final | Cost | Speed |
|---|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| GPT-4o ☁️ | 90 | 95 | 100 | 90 | 82 | 75 | 89 | 1.21 € | 1.50 rps |
| GPT-4 Turbo v5/2024-04-09 ☁️ | 86 | 99 | 98 | 93 | 88 | 45 | 85 | 2.45 € | 0.84 rps |
| Google Gemini Pro 1.5 0801 ☁️ | 84 | 92 | 90 | 100 | 70 | 72 | 85 | 1.48 € | 0.83 rps |
| GPT-4 v1/0314 ☁️ | 90 | 88 | 98 | 52 | 88 | 50 | 78 | 7.04 € | 1.31 rps |
| Claude 3.5 Sonnet ☁️ | 72 | 83 | 89 | 78 | 80 | 59 | 77 | 0.94 € | 0.09 rps |
| GPT-4 v2/0613 ☁️ | 90 | 83 | 95 | 52 | 88 | 50 | 76 | 7.04 € | 2.16 rps |
| GPT-4 Turbo v4/0125-preview ☁️ | 66 | 97 | 100 | 71 | 75 | 45 | 76 | 2.45 € | 0.84 rps |
| GPT-4o Mini ☁️ | 63 | 87 | 80 | 52 | 100 | 67 | 75 | 0.04 € | 1.46 rps |
| Claude 3 Opus ☁️ | 69 | 88 | 100 | 53 | 76 | 59 | 74 | 4.69 € | 0.41 rps |
| Meta Llama 3.1 405B Instruct 🦙 | 81 | 93 | 92 | 55 | 75 | 46 | 74 | 2.39 € | 1.16 rps |
| GPT-4 Turbo v3/1106-preview ☁️ | 66 | 75 | 98 | 52 | 88 | 62 | 73 | 2.46 € | 0.68 rps |
| Mistral Large 123B v2/2407 ☁️ | 68 | 79 | 68 | 75 | 75 | 71 | 73 | 0.86 € | 1.02 rps |
| Gemini Pro 1.5 0514 ☁️ | 73 | 96 | 75 | 100 | 25 | 62 | 72 | 2.01 € | 0.92 rps |
| Meta Llama 3.1 70B Instruct f16 🦙 | 74 | 89 | 90 | 55 | 75 | 46 | 72 | 1.79 € | 0.90 rps |
| Gemini Pro 1.5 0409 ☁️ | 68 | 97 | 96 | 63 | 75 | 28 | 71 | 1.84 € | 0.59 rps |
| GPT-3.5 v2/0613 ☁️ | 68 | 81 | 73 | 75 | 81 | 48 | 71 | 0.34 € | 1.46 rps |
| GPT-3.5 v3/1106 ☁️ | 68 | 70 | 71 | 63 | 78 | 59 | 68 | 0.24 € | 2.33 rps |
| Gemini Pro 1.0 ☁️ | 66 | 86 | 83 | 60 | 88 | 26 | 68 | 0.09 € | 1.36 rps |
| GPT-3.5 v4/0125 ☁️ | 63 | 87 | 71 | 60 | 78 | 47 | 68 | 0.12 € | 1.43 rps |
| Gemini 1.5 Flash 0514 ☁️ | 32 | 97 | 100 | 56 | 72 | 41 | 66 | 0.09 € | 1.77 rps |
| Cohere Command R+ ☁️ | 63 | 80 | 76 | 49 | 70 | 59 | 66 | 0.83 € | 1.90 rps |
| Qwen1.5 32B Chat f16 ⚠️ | 70 | 90 | 82 | 56 | 78 | 15 | 65 | 0.97 € | 1.66 rps |
| GPT-3.5-instruct 0914 ☁️ | 47 | 92 | 69 | 60 | 88 | 32 | 65 | 0.35 € | 2.15 rps |
| Mistral Nemo 12B v1/2407 ☁️ | 54 | 58 | 51 | 97 | 75 | 50 | 64 | 0.07 € | 1.22 rps |
| Mistral 7B OpenChat-3.5 v3 0106 f16 ✅ | 68 | 87 | 67 | 52 | 88 | 23 | 64 | 0.32 € | 3.39 rps |
| Meta Llama 3 8B Instruct f16 🦙 | 79 | 62 | 68 | 49 | 80 | 42 | 64 | 0.32 € | 3.33 rps |
| GPT-3.5 v1/0301 ☁️ | 55 | 82 | 69 | 67 | 82 | 24 | 63 | 0.35 € | 4.12 rps |
| Gemma 7B OpenChat-3.5 v3 0106 f16 ✅ | 63 | 67 | 84 | 33 | 81 | 48 | 63 | 0.21 € | 5.09 rps |
| Llama 3 8B OpenChat-3.6 20240522 f16 ✅ | 76 | 51 | 76 | 45 | 88 | 39 | 62 | 0.28 € | 3.79 rps |
| Mistral 7B OpenChat-3.5 v1 f16 ✅ | 58 | 72 | 72 | 49 | 88 | 31 | 62 | 0.49 € | 2.20 rps |
| Mistral 7B OpenChat-3.5 v2 1210 f16 ✅ | 63 | 73 | 72 | 45 | 88 | 28 | 61 | 0.32 € | 3.40 rps |
| Starling 7B-alpha f16 ⚠️ | 58 | 66 | 67 | 52 | 88 | 36 | 61 | 0.58 € | 1.85 rps |
| Yi 1.5 34B Chat f16 ⚠️ | 47 | 78 | 70 | 52 | 86 | 28 | 60 | 1.18 € | 1.37 rps |
| Claude 3 Haiku ☁️ | 64 | 69 | 64 | 55 | 75 | 33 | 60 | 0.08 € | 0.52 rps |
| Mixtral 8x22B API (Instruct) ☁️ | 53 | 62 | 62 | 94 | 75 | 7 | 59 | 0.17 € | 3.12 rps |
| Meta Llama 3.1 8B Instruct f16 🦙 | 57 | 74 | 62 | 52 | 74 | 34 | 59 | 0.45 € | 2.41 rps |
| Codestral Mamba 7B v1 ✅ | 53 | 66 | 51 | 94 | 71 | 17 | 59 | 0.30 € | 2.82 rps |
| Meta Llama 3.1 70B Instruct b8 🦙 | 60 | 76 | 75 | 30 | 81 | 26 | 58 | 5.28 € | 0.31 rps |
| Claude 3 Sonnet ☁️ | 72 | 41 | 74 | 52 | 78 | 30 | 58 | 0.95 € | 0.85 rps |
| Qwen2 7B Instruct f32 ⚠️ | 50 | 81 | 81 | 39 | 66 | 29 | 58 | 0.46 € | 2.36 rps |
| Mistral Large v1/2402 ☁️ | 37 | 49 | 70 | 75 | 84 | 25 | 57 | 2.14 € | 2.11 rps |
| Anthropic Claude Instant v1.2 ☁️ | 58 | 75 | 65 | 59 | 65 | 14 | 56 | 2.10 € | 1.49 rps |
| Anthropic Claude v2.0 ☁️ | 63 | 52 | 55 | 45 | 84 | 35 | 55 | 2.19 € | 0.40 rps |
| Cohere Command R ☁️ | 45 | 66 | 57 | 55 | 84 | 26 | 55 | 0.13 € | 2.50 rps |
| Qwen1.5 7B Chat f16 ⚠️ | 56 | 81 | 60 | 34 | 60 | 36 | 55 | 0.29 € | 3.76 rps |
| Anthropic Claude v2.1 ☁️ | 29 | 58 | 59 | 60 | 75 | 33 | 52 | 2.25 € | 0.35 rps |
| Mistral 7B OpenOrca f16 ☁️ | 54 | 57 | 76 | 21 | 78 | 26 | 52 | 0.41 € | 2.65 rps |
| Qwen1.5 14B Chat f16 ⚠️ | 50 | 58 | 51 | 49 | 84 | 17 | 51 | 0.36 € | 3.03 rps |
| Meta Llama 3 70B Instruct b8 🦙 | 51 | 72 | 53 | 29 | 82 | 18 | 51 | 6.97 € | 0.23 rps |
| Mistral 7B Instruct v0.1 f16 ☁️ | 34 | 71 | 69 | 44 | 62 | 21 | 50 | 0.75 € | 1.43 rps |
| Llama2 13B Vicuna-1.5 f16 🦙 | 50 | 37 | 53 | 39 | 82 | 38 | 50 | 0.99 € | 1.09 rps |
| Google Recurrent Gemma 9B IT f16 ⚠️ | 58 | 27 | 71 | 45 | 56 | 25 | 47 | 0.89 € | 1.21 rps |
| Codestral 22B v1 ✅ | 38 | 47 | 43 | 71 | 66 | 13 | 46 | 0.30 € | 4.03 rps |
| Llama2 13B Hermes f16 🦙 | 50 | 24 | 30 | 61 | 60 | 43 | 45 | 1.00 € | 1.07 rps |
| Llama2 13B Hermes b8 🦙 | 41 | 25 | 29 | 61 | 60 | 43 | 43 | 4.79 € | 0.22 rps |
| Mistral Small v2/2402 ☁️ | 33 | 42 | 36 | 82 | 56 | 8 | 43 | 0.18 € | 3.21 rps |
| Mistral Small v1/2312 (Mixtral) ☁️ | 10 | 67 | 65 | 51 | 56 | 8 | 43 | 0.19 € | 2.21 rps |
| IBM Granite 34B Code Instruct f16 ☁️ | 63 | 49 | 30 | 44 | 57 | 5 | 41 | 1.07 € | 1.51 rps |
| Mistral Medium v1/2312 ☁️ | 41 | 43 | 27 | 59 | 62 | 12 | 41 | 0.81 € | 0.35 rps |
| Llama2 13B Puffin f16 🦙 | 37 | 15 | 38 | 48 | 56 | 41 | 39 | 4.70 € | 0.23 rps |
| Mistral Tiny v1/2312 (7B Instruct v0.2) ☁️ | 22 | 47 | 57 | 40 | 59 | 8 | 39 | 0.05 € | 2.39 rps |
| Llama2 13B Puffin b8 🦙 | 37 | 14 | 37 | 46 | 56 | 39 | 38 | 8.34 € | 0.13 rps |
| Meta Llama2 13B chat f16 🦙 | 22 | 38 | 17 | 45 | 75 | 8 | 34 | 0.75 € | 1.44 rps |
| Meta Llama2 13B chat b8 🦙 | 22 | 38 | 15 | 45 | 75 | 6 | 33 | 3.27 € | 0.33 rps |
| Mistral 7B Zephyr-β f16 ✅ | 37 | 34 | 46 | 44 | 29 | 4 | 32 | 0.46 € | 2.34 rps |
| Meta Llama2 7B chat f16 🦙 | 22 | 33 | 20 | 42 | 50 | 20 | 31 | 0.56 € | 1.93 rps |
| Mistral 7B Notus-v1 f16 ⚠️ | 10 | 54 | 25 | 41 | 48 | 4 | 30 | 0.75 € | 1.43 rps |
| Orca 2 13B f16 ⚠️ | 18 | 22 | 32 | 22 | 67 | 19 | 30 | 0.95 € | 1.14 rps |
| Mistral 7B Instruct v0.2 f16 ☁️ | 11 | 30 | 50 | 13 | 58 | 8 | 29 | 0.96 € | 1.12 rps |
| Mistral 7B v0.1 f16 ☁️ | 0 | 9 | 42 | 42 | 52 | 12 | 26 | 0.87 € | 1.23 rps |
| Google Gemma 2B IT f16 ⚠️ | 33 | 28 | 14 | 39 | 15 | 20 | 25 | 0.30 € | 3.54 rps |
| Microsoft Phi 3 Medium 4K Instruct f16 ⚠️ | 5 | 34 | 30 | 13 | 47 | 8 | 23 | 0.82 € | 1.32 rps |
| Orca 2 7B f16 ⚠️ | 2 | 20 | 24 | 18 | 52 | 4 | 20 | 0.78 € | 1.38 rps |
| Google Gemma 7B IT f16 ⚠️ | 0 | 0 | 0 | 9 | 62 | 0 | 12 | 0.99 € | 1.08 rps |
| Meta Llama2 7B f16 🦙 | 0 | 5 | 18 | 3 | 28 | 2 | 9 | 0.95 € | 1.13 rps |
| Yi 1.5 9B Chat f16 ⚠️ | 0 | 4 | 29 | 8 | 0 | 8 | 8 | 1.41 € | 0.76 rps |

The benchmark categories in detail

Here's exactly what we look at in the different categories of the LLM leaderboard:

Docs: How well can the model work with large documents and knowledge bases?

CRM: How well does the model support work with product catalogs and marketplaces?

Integrate: Can the model easily interact with external APIs, services and plugins?

Marketing: How well can the model support marketing activities, e.g. brainstorming, idea generation and text generation?

Reason: How well can the model reason and draw conclusions in a given context?

Code: Can the model generate code and help with programming?

Cost: The estimated cost of running the workload. For cloud-based models, we calculate the cost according to the provider's pricing. For on-premises models, we estimate the cost based on the GPU requirements of each model, GPU rental prices, model speed, and operational overhead.

Speed: The estimated speed of the model in requests per second (without batching). The higher the speed, the better.
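The Final and Cost columns can also be read together as a rough value-for-money signal. Here is a small sketch; the figures are copied from the table above, and the "points per euro" metric is our own illustrative construct, not part of the benchmark:

```python
# Combine the Final score and Cost columns into a rough "points per euro"
# metric. (Final score, cost in EUR) pairs taken from the July table.
models = {
    "GPT-4o": (89, 1.21),
    "Claude 3.5 Sonnet": (77, 0.94),
    "GPT-4o Mini": (75, 0.04),
}

value = {name: round(final / cost, 1) for name, (final, cost) in models.items()}

for name, points_per_euro in sorted(value.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {points_per_euro} points per euro")
```

On this metric GPT-4o Mini dominates by a wide margin (75 / 0.04 = 1875 points per euro), which is why cheap small models are so attractive for high-volume workloads.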


Deeper insights

Codestral Mamba 7B

Mistral AI has made quite a few releases this month, but Codestral Mamba is our favorite. It's not extremely powerful, comparable to models like Llama 3.1 8B or Claude 3 Sonnet. But there are a few nuances:

  • This model is not designed for product or business tasks - it is a coding model. Nevertheless, it competes well with general-purpose models.

  • The model doesn't use the well-studied transformer architecture, but Mamba (also known as Linear-Time Sequence Modeling with Selective State Spaces). This architecture is considered more resource-efficient and has fewer constraints when working with large contexts. There have been multiple attempts to train a good Mamba model, but this is the first one to achieve good results on our leaderboard.

  • The new model is available for local use and can be obtained directly from HuggingFace. Nvidia TensorRT-LLM already supports this model.

 

GPT-4o Mini

GPT-4o Mini is a new multimodal model from OpenAI. It is similar in class to the GPT-3.5 models, but delivers better overall results. Its Reason score is quite high for such a small model. GPT-4o Mini is also the first model to score a perfect 100 in our Marketing category (which tests working with language and writing styles).

Given the extremely low cost and good results, GPT-4o Mini seems perfect for small, focused tasks such as routers and classifiers in LLM-driven products. Large-scale data extraction tasks also look like a good fit.
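To illustrate the router idea, here is a minimal sketch of such a component. The route labels and prompt wording are our own assumptions for illustration; the actual chat-completion call to the model is left out:

```python
# Sketch of a router built on a small, cheap model such as GPT-4o Mini.
# ROUTES and the prompt wording are illustrative assumptions.

ROUTES = ("billing", "technical_support", "sales", "other")

def build_router_prompt(user_message: str) -> list[dict]:
    """Build a chat payload that forces the model to answer with a single label."""
    system = (
        "Classify the user message into exactly one of these routes: "
        + ", ".join(ROUTES)
        + ". Answer with the label only."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]

def parse_route(model_answer: str) -> str:
    """Normalize the model's answer; fall back to 'other' on anything unexpected."""
    label = model_answer.strip().lower()
    return label if label in ROUTES else "other"
```

The defensive `parse_route` step matters in pipelines: even good small models occasionally return extra words, and a router must always produce a valid label.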

Mistral Nemo 12B

Mistral AI seems to be putting a lot of effort into bleeding-edge R&D. Mistral Nemo 12B is another example of this.

On the one hand, this model is a bit larger than the previous 7B models from Mistral AI. On the other hand, it has a few interesting nuances that make up for that.

First of all, the model has a better tokeniser under the hood, leading to more efficient token use (fewer tokens needed per input and output).

Secondly, the model was trained together with Nvidia using quantization-aware training. This means that the model is designed from the start to run in a resource-efficient mode. In this case, the model is designed to work well in FP8 mode, which means that the model weights take up a quarter of the usual size in memory (compared to FP32 format). Here is the announcement from Nvidia.

It's a nice coincidence that Nvidia GPUs of the CUDA compute capability 9.0 generation are designed to run FP8 natively (e.g. H100 GPUs for data centers).
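The memory claim is easy to verify with back-of-the-envelope math. A sketch for the weights alone; real deployments also need memory for the KV cache and activations:

```python
# Weight memory for Mistral Nemo 12B at different precisions.
# FP32 uses 4 bytes per parameter, FP8 uses 1 byte - a quarter of the size.
def weight_memory_gb(n_params: float, bytes_per_param: int) -> float:
    return n_params * bytes_per_param / 1e9

NEMO_PARAMS = 12e9  # 12B parameters

fp32_gb = weight_memory_gb(NEMO_PARAMS, 4)  # 48.0 GB
fp8_gb = weight_memory_gb(NEMO_PARAMS, 1)   # 12.0 GB
```

In FP8 the 12B model fits comfortably on a single modern data-center GPU, which is exactly the point of quantization-aware training.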

If you have the latest GPUs, this Mistral Nemo model can be a good replacement for the earlier 7B models from Mistral AI. Since the model also achieves a high Reason score, there is a chance that fine-tuning will push the model even higher.

You can download this model from Hugging Face or use it via the MistralAI API.

Mistral Large 123B v2

Mistral Large v2 is currently Mistral's best model in our benchmarks. It is available for download, which means you can run it on your own machines (although a license is required for commercial use).

This model also has a large context of 128k tokens. It claims to support multiple languages, both human and programming languages.

In our benchmark, this model has really good results and an unusually high Reason capability. It is comparable with GPT-4 Turbo v3, Gemini Pro 1.5 and Claude 3 Opus.

The unusual size of this Mistral model could indicate that it was also trained with FP8 awareness to replace the 70B models in their lineup (12:7 ~~ 123:80). If that's the case, we could see a general trend of new models appearing in these odd sizes. However, they will only run well on the latest GPUs. This may fragment the LLM landscape and slow down progress.

The lineup of the best Mistral models currently looks like this: Mistral Large 123B v2 at the top, followed by Mistral Nemo 12B, Mistral Large v1 and Mistral Small v2 (see the table above).

Llama 3.1 Models from Meta

Meta has released an update to its Llama 3.1 series that includes 3 model sizes: 8B, 70B and 405B. You can download all models from HuggingFace and use them locally. Most AI providers also offer support via API.

We tested the smaller models locally and used Google Vertex AI for the 405B. Google almost didn't mess up the integration (you may need to fix line breaks and truncate extra tokens at the beginning of the prompt).

The 8B model is not that interesting - it scores lower than the previous 3.0 version, so we’ll skip it. The other two models are way more interesting.

Meta Llama 3.1 70B has made a massive jump in quality compared to the previous version. It has caught up with Gemini Pro 1.5, surpassed GPT-3.5 and matched Mistral Large 123B v2. This is great news, because we can now achieve the quality of a 123B model with a much smaller one.

Note, by the way, that Llama 3.1 models can be quite sensitive to quantization (compression). For example, if we run the 70B model with 8-bit quantization (via bitsandbytes), performance and quality drop drastically: the Final score falls from 72 (f16) to 58 (b8) in our table.
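For reference, 8-bit loading of the kind behind the b8 rows is typically done with Hugging Face transformers and bitsandbytes. A configuration sketch, not a full deployment recipe - the model id is illustrative, and a machine with enough GPU memory plus access to the gated weights is assumed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Meta-Llama-3.1-70B-Instruct"

# Load weights in 8-bit via bitsandbytes; this roughly halves VRAM vs f16,
# but - as the scores above show - can noticeably hurt quality.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # spread layers across available GPUs
)
```

If quality matters more than memory, consider running the smaller 8B model in f16 instead of the 70B in 8-bit.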

Meta Llama 3.1 405B Instruct

Meta Llama 3.1 405B Instruct is the last hero of the month. This is the first downloadable model that managed to beat a GPT-4 Turbo model (its weakest version, v3/1106). You can find it in the TOP 10 of our benchmark table above.

It is a large model. You need 640 GB of VRAM (8x H100/A100) just to run it in FP8 with a small batch and context window. The resource requirements alone mean that far fewer people will use this model compared to the 70B/8B variants, so we will likely see fewer interesting fine-tunes and solutions built on it.

But that's not all that important. The important points are:

  • This is a model that you can download and use locally.

  • It outperforms one of the GPT-4 models.

  • It beats Mistral Large 2 in quality while having a more permissive license.

  • It reaches the quality of Claude 3 Opus.

This is a small breakthrough. We are sure that smaller models will also reach this level at some point.

Update: Google Gemini 1.5 Pro Experimental v0801

Normally we don't update benchmarks after publication, but this news deserved it. Waiting a whole month to report on the new Google Gemini model would be a waste.

This model was released as a public experiment on the first of August (you can find it in Google AI Studio). At that point it was also revealed that the model had been running for some time on the LMSYS Chatbot Arena, scoring at the top with more than 12k votes.

We ran our own benchmark using the Google AI Studio API (the model is not yet available on Vertex AI). The results are really impressive. We are talking about a substantial jump in model capabilities from the first version of Gemini Pro 1.5 in April.

This Google model suddenly managed to overtake almost all GPT-4 models and catch up with the top, taking third place. The scores are quite solid.

The scores could have been even better if Gemini Pro 1.5 paid more attention to following instructions precisely. While extreme attention to detail isn't always needed in human interactions, it is essential in products and LLM pipelines deployed at our customers. The top two models from OpenAI still excel in that capability.

Still, the news is outstanding and worth celebrating. First of all, we have a new source of innovation that has managed to catch up with OpenAI (and we thought Google was out of the race). Second, companies deeply invested in the Google Cloud ecosystem will finally get access to a top-quality large language model within it.

And who knows whether Google Gemini 2.0 will manage to push capabilities even further. The pace of progress so far has been quite impressive: in our benchmark, Gemini Pro went from a Final score of 68 (Pro 1.0) to 71 and 72 (Pro 1.5 0409 and 0514) and now 85 (0801) within a few months.


Local AI and Compliance

We have been tracking this trend for some time now: local models are becoming increasingly powerful and are beating more complex closed-source models.

Local models are quite interesting for many customers, since they seem to address a lot of problems with privacy, confidentiality and compliance. There is less chance of leaking private data if your LLMs run completely on premises, within your security perimeter, right?

Nuances and new regulations: The EU AI Act

However, there are still some nuances. On August 1, 2024, the Artificial Intelligence Act comes into force in the EU. It creates a common regulatory and legal framework for AI in the EU, with various provisions phasing in over the next 3 years.

The EU AI Act regulates not only AI providers (such as OpenAI or MistralAI), but also companies that use AI in a professional context.

Risk-based regulation: What does this mean for your company?

Obviously, not everyone will be regulated the same way. Regulation is based on risk levels, and most AI applications are expected to fall into the "minimal risk" category. However, it is quite easy to step into a higher risk category (for example, if the AI allows image manipulation or is used in education or recruitment).

Due diligence: more than just local models

In other words, some due diligence will be required for all large companies. The statement "We only use local models" may not be sufficient.

Checklist for compliance with AI regulations

Here's a quick check to see if you're on the right track to ensure compliance for your AI system. Have you documented the answers to these questions and communicated them clearly within your organization?

  • Who are the main users of your system? What are the industries and specific applications of your system? What is the risk classification here?

  • What is the exact name, version, vendor and platform/environment of your AI components?

  • What are the affiliations and partnerships of your AI providers? What are the licensing terms?

  • Where are your systems used geographically? Under which jurisdiction do your AI systems operate?

  • Who is responsible for the system and processes for managing AI risks in your company?

  • Who is responsible for the documentation and communication of your AI system (including things like architecture, components, dependencies, functional requirements and performance standards)?
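One pragmatic way to keep these answers documented and communicated is to maintain them as structured records alongside your system inventory. A minimal sketch; the field names are our own illustration of the checklist above, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIComponentRecord:
    """Illustrative record capturing the compliance checklist for one AI component."""
    name: str                      # exact name and version of the AI component
    vendor: str                    # provider, incl. affiliations/partnerships if relevant
    platform: str                  # platform/environment it runs on
    risk_class: str                # e.g. "minimal", "limited", "high"
    users_and_industries: list[str] = field(default_factory=list)
    jurisdictions: list[str] = field(default_factory=list)
    licensing_terms: str = ""
    risk_owner: str = ""           # who manages AI risks for this system
    documentation_owner: str = ""  # who maintains architecture/performance docs

# Hypothetical example entry:
record = AIComponentRecord(
    name="GPT-4o Mini (2024-07)",
    vendor="OpenAI",
    platform="Azure OpenAI",
    risk_class="minimal",
    users_and_industries=["internal support team"],
    jurisdictions=["EU"],
)
```

Keeping such records versioned next to your architecture documentation makes the answers easy to produce when a review or audit comes up.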

Your path to AI compliance

If you have concrete answers to these questions, chances are you're already well on your way with AI compliance. This also means that your company will keep an eye on the compliance effort of different options when evaluating LLM-driven solutions.

You can contact us at any time if you have any questions about AI compliance or would like to discuss the topic in more detail.


LLM Benchmarks Archive

Interested in the benchmarks of the past months? You can find all the links on our LLM Benchmarks overview page!

Learn more

Transform your digital projects with the best AI language models!

Discover the transformative power of the best LLM and revolutionize your digital products with AI! Stay future-oriented, increase efficiency and secure a clear competitive advantage. We support you in taking your business value to the next level.



Blog
Blog

ChatGPT & Co: LLM Benchmarks for October

Find out which large language models outperformed in the October 2024 benchmarks. Stay informed on the latest AI developments and performance metrics.

Blog
Blog

ChatGPT & Co: LLM Benchmarks for September

Find out which large language models outperformed in the September 2024 benchmarks. Stay informed on the latest AI developments and performance metrics.

Blog
Blog

ChatGPT & Co: LLM Benchmarks for November

Find out which large language models outperformed in the November 2024 benchmarks. Stay informed on the latest AI developments and performance metrics.

Martin WarnungMartin WarnungBlog
Blog

Common Mistakes in the Development of AI Assistants

How fortunate that people make mistakes: because we can learn from them and improve. We have closely observed how companies around the world have implemented AI assistants in recent months and have, unfortunately, often seen them fail. We would like to share with you how these failures occurred and what can be learned from them for future projects: So that AI assistants can be implemented more successfully in the future!

Jörg EgretzbergerJörg EgretzbergerBlog
Blog

8 tips for developing AI assistants

AI assistants for businesses are hype, and many teams were already eagerly and enthusiastically working on their implementation. Unfortunately, however, we have seen that many teams we have observed in Europe and the US have failed at the task. Read about our 8 most valuable tips, so that you will succeed.

Rinat AbdullinRinat AbdullinBlog
Blog

Open-sourcing 4 solutions from the Enterprise RAG Challenge

Our RAG competition is a friendly challenge different AI Assistants competed in answering questions based on the annual reports of public companies.

TIMETOACT
Referenz
Referenz

Standardized data management creates basis for reporting

TIMETOACT implements a higher-level data model in a data warehouse for TRUMPF Photonic Components and provides the necessary data integration connection with Talend. With this standardized data management, TRUMPF will receive reports based on reliable data in the future and can also transfer the model to other departments.

TIMETOACT
Technologie
Headerbild zu IBM Cloud Pak for Data Accelerator
Technologie

IBM Cloud Pak for Data Accelerator

For a quick start in certain use cases, specifically for certain business areas or industries, IBM offers so-called accelerators based on the "Cloud Pak for Data" solution, which serve as a template for project development and can thus significantly accelerate the implementation of these use cases. The platform itself provides all the necessary functions for all types of analytics projects, and the accelerators provide the respective content.

Rinat AbdullinRinat AbdullinBlog
Blog

LLM Performance Series: Batching

Beginning with the September Trustbit LLM Benchmarks, we are now giving particular focus to a range of enterprise workloads. These encompass the kinds of tasks associated with Large Language Models that are frequently encountered in the context of large-scale business digitalization.

TIMETOACT
Martin LangeMartin LangeBlog
Checkliste als Symbol für die verschiedenen To Dos im Bereich Lizenzmanagement
Blog

License Management – Everything you need to know

License management is not only relevant in terms of compliance but can also minimize costs and risks. Read more in the article.

Felix KrauseBlog
Blog

AIM Hackathon 2024: Sustainability Meets LLMs

Focusing on impactful AI applications, participants addressed key issues like greenwashing detection, ESG report relevance mapping, and compliance with the European Green Deal.

Blog
Blog

Third Place - AIM Hackathon 2024: The Venturers

ESG reports are often filled with vague statements, obscuring key facts investors need. This team created an AI prototype that analyzes these reports sentence-by-sentence, categorizing content to produce a "relevance map".

Blog
Blog

Second Place - AIM Hackathon 2024: Trustpilot for ESG

The NightWalkers designed a scalable tool that assigns trustworthiness scores based on various types of greenwashing indicators, including unsupported claims and inaccurate data.

Blog
Blog

SAM Wins First Prize at AIM Hackathon

The winning team of the AIM Hackathon, nexus. Group AI, developed SAM, an AI-powered ESG reporting platform designed to help companies streamline their sustainability compliance.

TIMETOACT
Referenz
Referenz

Interactive online portal identifies suitable employees

TIMETOACT digitizes several test procedures for KI.TEST to determine professional intelligence and personality.

TIMETOACT
Referenz
Referenz

Managed service support for optimal license management

To ensure software compliance, TIMETOACT supports FUNKE Mediengruppe with a SAM Managed Service for Microsoft, Adobe, Oracle and IBM.

TIMETOACT
Technologie
Headerbild zu Cloud Pak for Data – Test-Drive
Technologie

IBM Cloud Pak for Data – Test-Drive

By making our comprehensive demo and customer data platform available, we want to offer these customers a way to get a very quick and pragmatic impression of the technology with their data.

TIMETOACT
Technologie
Headerbild zu IBM Watson Knowledge Studio
Technologie

IBM Watson Knowledge Studio

In IBM Watson Knowledge Studio, you train an Artificial Intelligence (AI) on specialist terms of your company or specialist area ("domain knowledge"). In this way, you lay the foundation for automated text processing of extensive, subject-related documents.

TIMETOACT
Technologie
Headerbild zu IBM Watson Discovery
Technologie

IBM Watson Discovery

With Watson Discovery, company data is searched using modern AI to extract information. On the one hand, the AI uses already trained methods to understand texts; on the other hand, it is constantly developed through new training on the company data, its structure and content, thus constantly improving the search results.

TIMETOACT
Technologie
Headerbild zu IBM Watson Assistant
Technologie

IBM Watson Assistant

Watson Assistant identifies intention in requests that can be received via multiple channels. Watson Assistant is trained based on real-live requests and can understand the context and intent of the query based on the acting AI. Extensive search queries are routed to Watson Discovery and seamlessly embedded into the search result.

Workshop
Workshop

AI Workshops for Companies

Whether it's the basics of AI, prompt engineering, or potential scouting: our diverse AI workshop offerings provide the right content for every need.

Rinat AbdullinRinat AbdullinBlog
Blog

The Intersection of AI and Voice Manipulation

The advent of Artificial Intelligence (AI) in text-to-speech (TTS) technologies has revolutionized the way we interact with written content. Natural Readers, standing at the forefront of this innovation, offers a comprehensive suite of features designed to cater to a broad spectrum of needs, from personal leisure to educational support and commercial use. As we delve into the capabilities of Natural Readers, it's crucial to explore both the advantages it brings to the table and the ethical considerations surrounding voice manipulation in TTS technologies.

Aqeel AlazreeBlog
Blog

Database Analysis Report

This report comprehensively analyzes the auto parts sales database. The primary focus is understanding sales trends, identifying high-performing products, Analyzing the most profitable products for the upcoming quarter, and evaluating inventory management efficiency.

Aqeel AlazreeBlog
Blog

Part 4: Save Time and Analyze the Database File

ChatGPT-4 enables you to analyze database contents with just two simple steps (copy and paste), facilitating well-informed decision-making.

Aqeel AlazreeBlog
Blog

Part 3: How to Analyze a Database File with GPT-3.5

In this blog, we'll explore the proper usage of data analysis with ChatGPT and how you can analyze and visualize data from a SQLite database to help you make the most of your data.

Felix KrauseBlog
Blog

License Plate Detection for Precise Car Distance Estimation

When it comes to advanced driver-assistance systems or self-driving cars, one needs to find a way of estimating the distance to other vehicles on the road.

Rinat AbdullinRinat AbdullinBlog
Blog

Let's build an Enterprise AI Assistant

In the previous blog post we have talked about basic principles of building AI assistants. Let’s take them for a spin with a product case that we’ve worked on: using AI to support enterprise sales pipelines.

Rinat AbdullinRinat AbdullinBlog
Blog

So You are Building an AI Assistant?

So you are building an AI assistant for the business? This is a popular topic in the companies these days. Everybody seems to be doing that. While running AI Research in the last months, I have discovered that many companies in the USA and Europe are building some sort of AI assistant these days, mostly around enterprise workflow automation and knowledge bases. There are common patterns in how such projects work most of the time. So let me tell you a story...

Aqeel AlazreeBlog
Blog

Part 1: Data Analysis with ChatGPT

In this new blog series we will give you an overview of how to analyze and visualize data, create code manually and how to make ChatGPT work effectively. Part 1 deals with the following: In the data-driven era, businesses and organizations are constantly seeking ways to extract meaningful insights from their data. One powerful tool that can facilitate this process is ChatGPT, a state-of-the-art natural language processing model developed by OpenAI. In Part 1 pf this blog, we'll explore the proper usage of data analysis with ChatGPT and how it can help you make the most of your data.

Rinat AbdullinRinat AbdullinBlog
Blog

5 Inconvenient Questions when hiring an AI company

This article discusses five questions you should ask when buying an AI. These questions are inconvenient for providers of AI products, but they are necessary to ensure that you are getting the best product for your needs. The article also discusses the importance of testing the AI system on your own data to see how it performs.

Matus ZilinskyBlog
Blog

Creating a Social Media Posts Generator Website with ChatGPT

Using the GPT-3-turbo and DALL-E models in Node.js to create a social post generator for a fictional product can be really helpful. The author uses ChatGPT to create an API that utilizes the openai library for Node.js., a Vue component with an input for the title and message of the post. This article provides step-by-step instructions for setting up the project and includes links to the code repository.

Rinat AbdullinRinat AbdullinBlog
Blog

Strategic Impact of Large Language Models

This blog discusses the rapid advancements in large language models, particularly highlighting the impact of OpenAI's GPT models.

Branche
Branche

Artificial Intelligence in Treasury Management

Optimize treasury processes with AI: automated reports, forecasts, and risk management.

TIMETOACT
Referenz
Referenz

Flexibility in the data evaluation of a theme park

With the support of TIMETOACT, an theme park in Germany has been using TM1 for many years in different areas of the company to carry out reporting, analysis and planning processes easily and flexibly.

Referenz
Referenz

Automated Planning of Transport Routes

Efficient transport route planning through automation and seamless integration.

TIMETOACT
Service
Header Konnzeption individueller Business Intelligence Lösungen
Service

Conception of individual Analytics and Big Data solutions

We determine the best approach to develop an individual solution from the professional, role-specific requirements – suitable for the respective situation!

Felix KrauseBlog
Blog

Creating a Cross-Domain Capable ML Pipeline

As classifying images into categories is a ubiquitous task occurring in various domains, a need for a machine learning pipeline which can accommodate for new categories is easy to justify. In particular, common general requirements are to filter out low-quality (blurred, low contrast etc.) images, and to speed up the learning of new categories if image quality is sufficient. In this blog post we compare several image classification models from the transfer learning perspective.

TIMETOACT
Technologie
Headerbild zu IBM Decision Optimization
Technologie

Decision Optimization

Mathematical algorithms enable fast and efficient improvement of partially contradictory specifications. As an integral part of the IBM Data Science platform "Cloud Pak for Data" or "IBM Watson Studio", decision optimisation has been decisively expanded and embedded in the Data Science process.

TIMETOACT
Service
Navigationsbild zu Business Intelligence
Service

Business Intelligence

Business Intelligence (BI) is a technology-driven process for analyzing data and presenting usable information. On this basis, sound decisions can be made.

TIMETOACT
Service
Navigationsbild zu Data Science
Service

Data Science, Artificial Intelligence and Machine Learning

For some time, Data Science has been considered the supreme discipline in the recognition of valuable information in large amounts of data. It promises to extract hidden, valuable information from data of any structure.

TIMETOACT
Service
Headerbild zu Operationalisierung von Data Science (MLOps)
Service

Operationalization of Data Science (MLOps)

Data and Artificial Intelligence (AI) can support almost any business process based on facts. Many companies are in the phase of professional assessment of the algorithms and technical testing of the respective technologies.

TIMETOACT
Service
Headerbild zu Dashboards und Reports
Service

Dashboards & Reports

The discipline of Business Intelligence provides the necessary means for accessing data. In addition, various methods have developed that help to transport information to the end user through various technologies.

TIMETOACT
Service
Headerbild zu Digitale Planung, Forecasting und Optimierung
Service

Demand Planning, Forecasting and Optimization

After the data has been prepared and visualized via dashboards and reports, the task is now to use the data obtained accordingly. Digital planning, forecasting and optimization describes all the capabilities of an IT-supported solution in the company to support users in digital analysis and planning.

Data Integration, ETL and Data Virtualization

While the term "ETL" (Extract - Transform - Load, or ELT) traditionally described the classic batch-driven process, "Data Integration" today covers all methods of integration: batch or real-time, inside or outside a database, and between any systems.

Data Governance

Data Governance describes all processes that aim to ensure the traceability, quality and protection of data. The need for documentation and traceability increases exponentially as more and more data from different sources is used for decision-making and as a result of the technical possibilities of integration in Data Warehouses or Data Lakes.

IBM Cloud Pak for Data

IBM Cloud Pak for Data acts as a central, modular platform for analytical use cases. It combines functions for the physical and virtual integration of data into a central data pool - a data lake or a data warehouse - with a comprehensive data catalog and numerous (AI) analysis capabilities, through to their operational use.

IBM Cloud Pak for Automation

IBM Cloud Pak for Automation helps you automate manual steps on a uniform platform with standardized interfaces. With Cloud Pak for Business Automation, the entire life cycle of a document or process can be mapped within the company.

IBM Cloud Pak for Application

IBM Cloud Pak for Application provides a solid foundation for developing, deploying and modernizing cloud-native applications. Since agile working is essential for faster release cycles, it includes ready-made DevOps processes, among other things.

IBM Cloud Pak for Data System

With the Cloud Pak for Data System (CP4DS), IBM provides optimized hardware for running all Cloud Pak for Data functions, continuing its series of pre-configured systems ("appliance" or "hyperconverged system").

Talend Data Integration

Talend Data Integration offers a highly scalable architecture for almost any application and any data source - with well over 900 connectors from cloud solutions like Salesforce to classic on-premises systems.

Talend Application Integration / ESB

With Talend Application Integration, you create a service-oriented architecture and connect, broker & manage your services and APIs in real time.

Talend Real-Time Big Data Platform

Talend Big Data Platform simplifies complex integrations so you can successfully use Big Data with Apache Spark, Databricks, AWS, IBM Watson, Microsoft Azure, Snowflake, Google Cloud Platform and NoSQL.

Talend Data Fabric

The ultimate solution for your data needs – Talend Data Fabric includes everything your (Data Integration) heart desires and serves all integration needs relating to applications, systems and data.

IBM Watson® Knowledge Catalog/Information Governance Catalog

Today, "IGC" is a proprietary enterprise cataloging and metadata management solution that forms the foundation of all of an organization's efforts to comply with rules and regulations and to document analytical assets.

IBM InfoSphere Information Server

IBM Information Server is a central platform for enterprise-wide information integration. With IBM Information Server, business information can be extracted, consolidated and merged from a wide variety of sources.

IBM Db2

The IBM Db2 database has been established on the market for many years as the leading data warehouse database, in addition to its classic operational use.

IBM Netezza Performance Server

IBM offers purpose-built database technology in the form of appliance solutions. In the data warehouse environment, the Netezza technology, later marketed as "IBM PureData for Analytics", is particularly well known.

IBM Planning Analytics with Watson

IBM Planning Analytics with Watson enables the automation of planning, budgeting, forecasting and analysis processes using IBM TM1.

IBM SPSS Modeler

IBM SPSS Modeler is a tool for modeling and executing tasks, for example in Data Science and Data Mining, via a graphical user interface.