adesso Blog
11.10.2024 By Marco Becker and Ivan Butron Sossa
Facts instead of gut feeling: Using process mining to understand and optimise processes in any industry based on data
In an age when efficiency and rapid adaptability are crucial, process mining offers significant added value: it enables companies to optimise their business processes in a data-driven way. In this blog post, we explain in more detail how process mining can help – in virtually any organisation, regardless of industry or process.
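As a small, hedged illustration of what "data-driven" means here: process-discovery libraries such as pm4py can derive a process model directly from an event log. The log file name and the resulting model are assumptions for this sketch, not taken from the post.

```python
# Minimal process-discovery sketch with pm4py (pip install pm4py).
# The event log file is hypothetical.
import pm4py

# Load an event log exported from a source system (XES is a common format).
log = pm4py.read_xes("orders_event_log.xes")

# Discover a Petri net from the observed behaviour using the inductive miner.
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)

# Visualise the mined model to compare it with the documented target process.
pm4py.view_petri_net(net, initial_marking, final_marking)
```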
07.08.2024 By Siver Rajab
Switching from PostgreSQL to Databricks: When does it make sense?
In modern data processing, companies are faced with the challenge of choosing the right database technology for their specific requirements. PostgreSQL and Databricks are two widely used solutions, each with its own strengths. In this blog post, I highlight the differences between PostgreSQL and Databricks, weigh up their respective advantages and disadvantages, and describe specific use cases in which a switch to Databricks is justified.
06.06.2024 By Christian Del Monte
Change Data Capture for Data Lakehouse
Change Data Capture (CDC) is a technique that captures all data changes in a data store, collects them and prepares them for transfer and replication to other systems, either as a batch process or as a stream. This blog post focuses on the application of CDC in data lakehouses, using the example of Change Data Feed, a CDC variant developed by Databricks for Delta Lake-based data lakehouses.
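For readers who want a concrete picture, this is roughly what Change Data Feed looks like in PySpark on a Delta table. The table name and starting version are placeholders, and `spark` is the session a Databricks notebook provides.

```python
# Sketch: enabling and reading Delta Lake's Change Data Feed.
# Table name and starting version are hypothetical.

# Enable the change feed on an existing Delta table.
spark.sql(
    "ALTER TABLE sales.orders "
    "SET TBLPROPERTIES (delta.enableChangeDataFeed = true)"
)

# Read all changes recorded since table version 5 as a batch.
changes = (
    spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", 5)
    .table("sales.orders")
)

# Each row carries _change_type (insert / update_preimage / update_postimage /
# delete), _commit_version and _commit_timestamp alongside the table columns.
changes.show()
```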
14.05.2024 By Christian Del Monte
Keep data changes under control with Change Data Capture
In a distributed software system, data changes always pose a challenge: how can the change history of data in one part of the system be tracked so that connected data stores in other subsystems stay in sync? Change Data Capture (CDC) offers an answer to this question. In this blog post, I explain what it is all about.
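As a minimal sketch of the simplest CDC variant, query-based capture via a last-modified timestamp (table and column names are invented for illustration):

```python
# Query-based CDC sketch: poll a source table for rows changed since the
# last captured watermark. Table and column names are hypothetical.
import sqlite3

def capture_changes(conn: sqlite3.Connection, last_sync: str) -> list[tuple]:
    """Return all rows of 'customers' modified after the given timestamp."""
    cur = conn.execute(
        "SELECT id, name, updated_at FROM customers WHERE updated_at > ?",
        (last_sync,),
    )
    return cur.fetchall()

# The caller persists the newest updated_at it has seen and passes it back
# on the next poll, so each change reaches downstream stores exactly once.
```

Log-based CDC tools such as Debezium read the database's transaction log instead, which avoids polling and also captures deletes reliably.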
15.04.2024 By Patrick Kübler
Why our customers start with production reporting when it comes to IIoT
Every day, employees struggle with manual reporting processes that cause high personnel costs and quality deficiencies and leave little room for process optimisation. Despite the crucial importance of KPIs for management, manual reporting processes are still widespread in production. In this blog post, I explain why companies in the IIoT sector start with production reporting.
06.03.2024 By Stefan Klempnauer
Jsonnet as an accelerator for metadata-driven data pipelines
Metadata-driven data pipelines are a game changer for data processing in companies. These pipelines use metadata to adapt their processing dynamically instead of requiring every step to be revised manually whenever a data source changes. As with the pipelines themselves, however, maintaining the metadata can become a bottleneck in the maintenance and further development of a pipeline framework. In this blog post, I use practical examples to show how the Jsonnet template language makes metadata easier to maintain.
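To make the idea tangible, here is a hedged sketch that evaluates a Jsonnet template into pipeline metadata from Python via the official bindings (pip install jsonnet). The template, its fields and the source/target paths are invented for illustration.

```python
# Sketch: generating per-table pipeline metadata from one Jsonnet template.
# Requires the official bindings: pip install jsonnet. All names are invented.
import json
import _jsonnet

TEMPLATE = """
// One function describes the ingestion config for any source table.
local ingestion(table, schedule) = {
  source: 'postgres://erp/' + table,
  target: 'lake/raw/' + table,
  schedule: schedule,
};
{
  pipelines: [
    ingestion('customers', '0 * * * *'),
    ingestion('orders', '*/15 * * * *'),
  ],
}
"""

# Evaluate the template; the result is plain JSON the pipeline framework reads.
metadata = json.loads(_jsonnet.evaluate_snippet("pipelines.jsonnet", TEMPLATE))
print(metadata["pipelines"][0]["target"])  # lake/raw/customers
```

Adding a new source table then means adding one function call rather than copying and editing a whole JSON configuration block.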
22.02.2024 By Azza Baatout and Marc Mezger
Prefect - Workflow orchestration for AI and data engineering projects
Workflow orchestration and workflow engines are crucial components in modern data processing and software development, especially in the field of artificial intelligence (AI). These technologies make it possible to efficiently manage and coordinate various tasks and processes within complex data pipelines. In this blog post, we present Prefect, an intuitive tool for orchestrating workflows in AI development.
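For a first taste of what orchestration with Prefect looks like, here is a minimal sketch using the Prefect 2.x flow/task API; the step contents and retry settings are illustrative placeholders.

```python
# Minimal Prefect 2.x sketch: tasks composed into an orchestrated flow.
# Step contents are placeholders.
from prefect import flow, task

@task(retries=3, retry_delay_seconds=10)
def extract() -> list[int]:
    # Pull raw records from a source system (stubbed here).
    return [1, 2, 3]

@task
def transform(records: list[int]) -> list[int]:
    return [r * 2 for r in records]

@flow(log_prints=True)
def etl_pipeline():
    records = extract()
    print(transform(records))

if __name__ == "__main__":
    etl_pipeline()  # Runs locally; a deployment would schedule it.
```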
12.02.2024 By Siver Rajab
Snowflake: the advanced data management solution
In the ever-evolving world of data analytics and data management, Snowflake plays a prominent role in shaping the industry. This blog post looks at how Snowflake has developed and why it is considered a ground-breaking solution for businesses.
26.12.2023 By Mykola Zubok
Data Evolution Chronicles: Tracing the Path from Warehouse to Lakehouse Architecture - Part 1
Before the introduction of computers, companies used account books, inventory lists and intuition to record their key figures. The data warehouse for static reports emerged at the end of the 1980s, but digitalisation brought new challenges: big data overwhelmed traditional data warehouses, so companies introduced data lakes, which in turn changed the architecture and concepts of analytics systems. The first part of this blog post traces this development, the reasons behind it and the problems it solved.