{"id":15217,"date":"2026-05-04T09:00:00","date_gmt":"2026-05-04T07:00:00","guid":{"rendered":"https:\/\/www.datamondial.com\/?p=15217"},"modified":"2026-04-29T11:55:29","modified_gmt":"2026-04-29T09:55:29","slug":"dirty-legacy-data-unexpected-ai-bottleneck","status":"publish","type":"post","link":"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/","title":{"rendered":"Dirty Legacy Data: The Unexpected Bottleneck in Your Logistics AI Project"},"content":{"rendered":"\n\n<h2>Why algorithms fail on historical freight data<\/h2>\n<p>Machine learning models base their decision-making power on reliable logic and pattern recognition. When the input consists of decades of accumulated freight data and routing profiles, this data rarely reflects a single, consistent standard. Effective <a href=\"https:\/\/www.datamondial.com\/en\/services\/data-validation-for-ocr-ai-machine-learning\/\">data validation for OCR, AI, and Machine Learning &#8211; DataMondial<\/a> is essential because human entry errors, typos, and fluctuating terminology can derail algorithms during the initial training phase. According to the fundamental analysis in <a href=\"https:\/\/wux.nl\/blog\/schone-gestructureerde-data-is-de-onmisbare-basis-voor-succesvolle-ai\">AI only works with clean, structured data<\/a> by Wux, unstructured source datasets inevitably result in unusable AI predictions. Within logistics back-office environments and customs departments, the complexity goes far beyond simple spelling mistakes. Old customer files are notorious for containing &#8216;stray data&#8217;. These are information fields or notepad entries in databases that once served a specific, temporary purpose within a now-replaced transport management system, but were never systematically labeled or removed. 
The publication <a href=\"https:\/\/www.computable.nl\/2020\/11\/30\/ai-als-antwoord-op-legacy-data\/\">AI as the answer to legacy data<\/a> by Computable illustrates that connecting new data models to outdated structures merely leads to the automated reproduction of historical bottlenecks [1].<\/p>\n<h3>Shifting validation rules and stray data<\/h3>\n<p>Outdated ERP and customs systems simply lack uniform data. What was a mandatory numeric entry field for a specific declaration in 2012 might have later been merged into or replaced by a broader HS (Harmonized System) code. These shifting validation rules over a span of several years create datasets riddled with gaps. A neural network cannot differentiate between a field that was intentionally left blank due to a process change, and a field that was accidentally skipped by an employee. The result? The AI draws correlations that are entirely invalid from a logistical standpoint.<\/p>\n<h3>Document formats as data silos<\/h3>\n<p>The supply chain relies heavily on modality-specific documentation. An ocean freight Bill of Lading (B\/L) features an entirely different field layout, terminology, and party structure compared to a road transport CMR or an Air Waybill (AWB). Once recorded in legacy archives, these specific formats act as isolated data silos. Without a targeted transformation layer, an algorithm will completely miss the overlap between incoming ocean freight and the subsequent pre- or on-carriage run by road. The system views the streams as disconnected entities because the underlying data has never been standardized.<\/p>\n\n\n<h2>The hidden costs of unprepared AI integrations<\/h2>\n<p>Budget overruns in IT innovations often only surface once the actual data entry begins. 
The Salesforce article, <a href=\"https:\/\/www.salesforce.com\/nl\/blog\/data-cleaning-how-to\/\">5 ways to clean your data for AI agents<\/a>, cites recent data management research by Fivetran (2024), revealing that data scientists spend an average of 67% of their workday cleaning and formatting data. This structural waste of time diminishes the ROI of a logistics AI project from day one. <\/p>\n<p>The financial impact of dirty data inevitably follows the 1:10:100 rule. Quality assurance at the front door costs one euro, retroactively isolating and fixing a database error costs ten euros, and resolving the mistake costs a hundred euros once the data is live and causing operational damage. The practical consequences within supply chains are severe. When predictive models rely on historical customs delays that lack contextual verification, the software plans unrealistic transit times. Models calculate allocations based on incorrect chargeable weights. This leads to routing delays, triggers unnecessary storage costs, and causes severe capacity constraints at transshipment terminals.<\/p>\n<h2>Triage in back-office data: What do you clean first?<\/h2>\n<p>A functional cleanup process requires strict prioritization. Not every gigabyte of historical data yields enough process improvement to justify the expense of recovery or data migration. Using a fixed decision tree and evaluation matrix, an organization can separate active operational source data from archival records. Here, the primary focus is on identifying and isolating corrupted master data for mandatory manual revision before migrating anything at all. 
<\/p>\n<p>Below is a highly actionable data retention decision framework:<\/p>\n<table>\n<thead>\n<tr>\n<th align=\"left\">Data Category<\/th>\n<th align=\"left\">Risk Profile<\/th>\n<th align=\"left\">Action &amp; Priority<\/th>\n<th align=\"left\">Practical Example (Logistics)<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr>\n<td align=\"left\">Operations \/ Master Data<\/td>\n<td align=\"left\">High<\/td>\n<td align=\"left\">Clean &amp; validate immediately<\/td>\n<td align=\"left\">Current client files, delivery addresses, HS codes<\/td>\n<\/tr>\n<tr>\n<td align=\"left\">Analytical Datasets<\/td>\n<td align=\"left\">Medium<\/td>\n<td align=\"left\">Aggregate by timeframe<\/td>\n<td align=\"left\">Seasonal revenue and volume trends (up to 3 years)<\/td>\n<\/tr>\n<tr>\n<td align=\"left\">Fiscal Compliance<\/td>\n<td align=\"left\">High<\/td>\n<td align=\"left\">Clean &amp; store as read-only<\/td>\n<td align=\"left\">Declared customs documents, clearances<\/td>\n<\/tr>\n<tr>\n<td align=\"left\">Outdated Legacy<\/td>\n<td align=\"left\">Low<\/td>\n<td align=\"left\">Raw archiving (no AI)<\/td>\n<td align=\"left\">Transit history older than seven years<\/td>\n<\/tr>\n<\/tbody><\/table>\n<p>Structured search techniques form the foundation here. The tech firm <a href=\"https:\/\/www.my-lex.nl\">MY-LEX describes in The Art of Finding<\/a> how extraction systems can crack open and index unorganized legacy sources. Without this crucial groundwork, any effective triage operation is doomed from the start.<\/p>\n<h3>High-risk versus archive-worthy<\/h3>\n<p>Risk mitigation dictates priority. Errors in current customs data, such as a description deviating from its TARIC code, are classified as high-risk and demand immediate rectification. Discrepancies at this level halt physical freight at international borders. Conversely, specific delivery details for local runs back in 2014 are merely archive-worthy. 
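<\/p>
<p>The retention framework above can be sketched as a small decision function. The category labels and actions mirror the table; the seven-year demotion threshold is an assumption drawn from the last table row:<\/p>

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    category: str   # "master", "analytical", "fiscal", "legacy"
    age_years: int

# Risk/action pairs mirroring the retention table above.
ACTIONS = {
    "master":     ("high",   "clean_and_validate_now"),
    "analytical": ("medium", "aggregate_by_timeframe"),
    "fiscal":     ("high",   "clean_and_store_read_only"),
    "legacy":     ("low",    "raw_archive_no_ai"),
}

def triage(ds, legacy_after_years=7):
    """Return (risk, action); records older than the threshold are demoted
    to raw archiving regardless of their nominal category."""
    if ds.age_years > legacy_after_years:
        return ACTIONS["legacy"]
    return ACTIONS[ds.category]

print(triage(Dataset("current client files", "master", 1)))   # → ('high', 'clean_and_validate_now')
print(triage(Dataset("transit history", "analytical", 9)))    # → ('low', 'raw_archive_no_ai')
```

<p>Keeping the matrix as data rather than nested if-statements makes it easy for compliance officers to review and amend the policy without touching code.<\/p>
<p>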
These files require far too much processing time to be useful for modern planning software; cleaning legacy data for AI in this context costs more than the theoretical optimization value the machine learning would deliver.<\/p>\n<h3>The limits of automated data retrieval<\/h3>\n<p>Modern extraction software hits a brick wall the moment source systems fail to support API (Application Programming Interface) access. The limitations of automated retrieval become glaringly obvious with image-driven archives. Flat PDFs, handwritten weight slips, or scanned clearance documents that bypassed OCR (Optical Character Recognition) offer the computer zero readable or sortable data. For such volumes of closed documents, triage alone offers no immediate relief. These sources force a dedicated data migration trajectory, in which specialized back-office teams or RPA scripts must manually re-key the unstructured visual information and convert it into workable tables.<\/p>\n\n\n<h2>Human validation to correct automated cleaning tools<\/h2>\n<p>Expecting a script to independently salvage a murky database introduces massive operational risks. Automated tools are incredibly powerful at surface-level structure: they populate empty fields, correct currency formats, and harmonize date formatting (DD-MM-YYYY instead of MM-DD-YY). What they fundamentally lack, however, is logistics domain knowledge and operational context.<\/p>\n<p>When a script spots an ocean freight shipment weighing 12,000 kilograms with a volume of just 1 cubic meter, it passes technical format validation as long as the numbers reside in the correct fields. Back-office specialists instantly spot such physical impossibilities during spot checks. This insight points toward a robust, hybrid workflow: automation filters out unnecessary punctuation and duplicate records to maximize scalability, while experienced case handlers monitor data accuracy throughout the process. 
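<\/p>
<p>A hybrid pipeline typically encodes exactly this kind of physical sanity check as an automated pre-filter that routes implausible records to a human queue instead of &#8220;fixing&#8221; them silently. A minimal sketch; the density bounds are illustrative assumptions, since realistic limits depend on modality and cargo type:<\/p>

```python
def review_queue(shipments, min_density=10.0, max_density=3000.0):
    """Split records into (auto_ok, needs_review) based on whether
    weight / volume falls in a physically plausible kg/m3 band."""
    auto_ok, review = [], []
    for s in shipments:
        try:
            density = s["weight_kg"] / s["volume_m3"]
        except (KeyError, TypeError, ZeroDivisionError):
            review.append(s)   # missing or zero fields: never auto-pass
            continue
        (auto_ok if min_density <= density <= max_density else review).append(s)
    return auto_ok, review

ok, flagged = review_queue([
    {"ref": "A", "weight_kg": 800.0,   "volume_m3": 2.5},   # plausible break-bulk
    {"ref": "B", "weight_kg": 12000.0, "volume_m3": 1.0},   # the 12 t / 1 m3 case
    {"ref": "C", "weight_kg": 500.0,   "volume_m3": None},  # incomplete record
])
print([s["ref"] for s in flagged])  # → ['B', 'C']
```

<p>Note that records which cannot be evaluated at all are deliberately routed to review as well, never auto-passed.<\/p>
<p>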
According to the HSO article <a href=\"https:\/\/www.hso.com\/nl\/blog\/een-dataplatform-bouwen-dat-klaar-is-voor-ai\">Building an AI-ready data platform<\/a>, strict governance paired with human data oversight during the cleanup phase is the only viable guarantee of an AI-ready foundation. Furthermore, this human oversight during the preliminary stages immediately secures the compliance status of the final decisions the AI will eventually make. To tackle this structurally, a careful read through <a href=\"https:\/\/www.datamondial.com\/en\/training-ai-models-safely-eu-compliance-checklist\/\">Safely training AI models: The compliance checklist for data validation within the EU<\/a> is an indispensable asset for modern logistics companies.<\/p>\n<h3>The blind spots of automated tools<\/h3>\n<p>Deviating material specifications perfectly demonstrate the fundamental weakness of machine interpretation. Suppose information regarding dangerous goods (ADR) was typed into a free-text comment field (&#8220;watch out flammable&#8221;) for years as an unwritten shop-floor rule, instead of being recorded in the official hazard class field. An automated cleaning tool validates the comment field&#8217;s format and moves on; only a case handler who knows the shop-floor convention notices that the mandatory ADR classification itself is missing.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large content-amigo-image\"><img decoding=\"async\" src=\"https:\/\/www.datamondial.com\/wp-content\/uploads\/2026\/04\/2fa33f88-6a79-487d-a674-6ba5c0ed9d9e-section-3.jpg\" alt=\"Hands hovering over an illuminated keyboard working on a spreadsheet, cleaning legacy data for AI in a modern office setup.\" \/><\/figure>\n\n<h2>Sources<\/h2>\n1. <a href=\"https:\/\/www.infosupport.com\/resources\/van-legacy-last-naar-concurrentievoordeel-hoe-je-tot-70-sneller-moderniseert-met-ai\/\" target=\"_blank\" rel=\"noopener noreferrer\">From legacy burden to competitive advantage: how to modernize up to 70% faster with AI<\/a>","protected":false},"excerpt":{"rendered":"<p>Is dirty legacy data sabotaging your logistics AI projects? 
Discover why algorithms fail on historical freight data and how to clean it for success.<\/p>\n","protected":false},"author":10,"featured_media":15216,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_yoast_wpseo_focuskw":"","_yoast_wpseo_title":"Cleaning Legacy Data for AI: The Logistics Innovation Bottleneck","_yoast_wpseo_metadesc":"Discover why algorithms fail on historical freight data and learn how cleaning legacy data for AI prevents budget overruns and operational supply chain issues.","footnotes":""},"categories":[88,91],"tags":[],"class_list":["post-15217","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog","category-blog-en"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Cleaning Legacy Data for AI: The Logistics Innovation Bottleneck<\/title>\n<meta name=\"description\" content=\"Discover why algorithms fail on historical freight data and learn how cleaning legacy data for AI prevents budget overruns and operational supply chain issues.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Cleaning Legacy Data for AI: The Logistics Innovation Bottleneck\" \/>\n<meta property=\"og:description\" content=\"Discover why algorithms fail on historical freight data and learn how cleaning legacy data for AI prevents budget overruns and operational supply chain issues.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/\" \/>\n<meta 
property=\"og:site_name\" content=\"DataMondial\" \/>\n<meta property=\"article:published_time\" content=\"2026-05-04T07:00:00+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.datamondial.com\/wp-content\/uploads\/2026\/04\/dirty-legacy-data-unexpected-ai-bottleneck-en-featured.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1376\" \/>\n\t<meta property=\"og:image:height\" content=\"768\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Ralph van Es\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Ralph van Es\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/\"},\"author\":{\"name\":\"Ralph van Es\",\"@id\":\"https:\/\/www.datamondial.com\/#\/schema\/person\/5438b776538ac7702fbaa3b85ebf463e\"},\"headline\":\"Dirty Legacy Data: The Unexpected Bottleneck in Your Logistics AI 
Project\",\"datePublished\":\"2026-05-04T07:00:00+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/\"},\"wordCount\":1195,\"publisher\":{\"@id\":\"https:\/\/www.datamondial.com\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.datamondial.com\/wp-content\/uploads\/2026\/04\/dirty-legacy-data-unexpected-ai-bottleneck-en-featured.jpg\",\"articleSection\":[\"Blog\",\"Blog\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/\",\"url\":\"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/\",\"name\":\"Cleaning Legacy Data for AI: The Logistics Innovation Bottleneck\",\"isPartOf\":{\"@id\":\"https:\/\/www.datamondial.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.datamondial.com\/wp-content\/uploads\/2026\/04\/dirty-legacy-data-unexpected-ai-bottleneck-en-featured.jpg\",\"datePublished\":\"2026-05-04T07:00:00+00:00\",\"description\":\"Discover why algorithms fail on historical freight data and learn how cleaning legacy data for AI prevents budget overruns and operational supply chain 
issues.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/#primaryimage\",\"url\":\"https:\/\/www.datamondial.com\/wp-content\/uploads\/2026\/04\/dirty-legacy-data-unexpected-ai-bottleneck-en-featured.jpg\",\"contentUrl\":\"https:\/\/www.datamondial.com\/wp-content\/uploads\/2026\/04\/dirty-legacy-data-unexpected-ai-bottleneck-en-featured.jpg\",\"width\":1376,\"height\":768,\"caption\":\"Logistics server room with digital shipping containers, illustrating the process of cleaning legacy data for AI in a high-tech environment.\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.datamondial.com\/en\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Dirty Legacy Data: The Unexpected Bottleneck in Your Logistics AI 
Project\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.datamondial.com\/#website\",\"url\":\"https:\/\/www.datamondial.com\/\",\"name\":\"DataMondial\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/www.datamondial.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.datamondial.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.datamondial.com\/#organization\",\"name\":\"DataMondial\",\"url\":\"https:\/\/www.datamondial.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.datamondial.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.datamondial.com\/wp-content\/uploads\/2022\/10\/datamondial_onderschrift.svg\",\"contentUrl\":\"https:\/\/www.datamondial.com\/wp-content\/uploads\/2022\/10\/datamondial_onderschrift.svg\",\"width\":431,\"height\":94,\"caption\":\"DataMondial\"},\"image\":{\"@id\":\"https:\/\/www.datamondial.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.linkedin.com\/company\/datamondial\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.datamondial.com\/#\/schema\/person\/5438b776538ac7702fbaa3b85ebf463e\",\"name\":\"Ralph van Es\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Cleaning Legacy Data for AI: The Logistics Innovation Bottleneck","description":"Discover why algorithms fail on historical freight data and learn how cleaning legacy data for AI prevents budget overruns and operational supply chain issues.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/","og_locale":"en_US","og_type":"article","og_title":"Cleaning Legacy Data for AI: The Logistics Innovation Bottleneck","og_description":"Discover why algorithms fail on historical freight data and learn how cleaning legacy data for AI prevents budget overruns and operational supply chain issues.","og_url":"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/","og_site_name":"DataMondial","article_published_time":"2026-05-04T07:00:00+00:00","og_image":[{"width":1376,"height":768,"url":"https:\/\/www.datamondial.com\/wp-content\/uploads\/2026\/04\/dirty-legacy-data-unexpected-ai-bottleneck-en-featured.jpg","type":"image\/jpeg"}],"author":"Ralph van Es","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Ralph van Es","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/#article","isPartOf":{"@id":"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/"},"author":{"name":"Ralph van Es","@id":"https:\/\/www.datamondial.com\/#\/schema\/person\/5438b776538ac7702fbaa3b85ebf463e"},"headline":"Dirty Legacy Data: The Unexpected Bottleneck in Your Logistics AI Project","datePublished":"2026-05-04T07:00:00+00:00","mainEntityOfPage":{"@id":"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/"},"wordCount":1195,"publisher":{"@id":"https:\/\/www.datamondial.com\/#organization"},"image":{"@id":"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/#primaryimage"},"thumbnailUrl":"https:\/\/www.datamondial.com\/wp-content\/uploads\/2026\/04\/dirty-legacy-data-unexpected-ai-bottleneck-en-featured.jpg","articleSection":["Blog","Blog"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/","url":"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/","name":"Cleaning Legacy Data for AI: The Logistics Innovation Bottleneck","isPartOf":{"@id":"https:\/\/www.datamondial.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/#primaryimage"},"image":{"@id":"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/#primaryimage"},"thumbnailUrl":"https:\/\/www.datamondial.com\/wp-content\/uploads\/2026\/04\/dirty-legacy-data-unexpected-ai-bottleneck-en-featured.jpg","datePublished":"2026-05-04T07:00:00+00:00","description":"Discover why algorithms fail on historical freight data and learn how cleaning legacy data for AI prevents budget overruns and operational supply chain 
issues.","breadcrumb":{"@id":"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/#primaryimage","url":"https:\/\/www.datamondial.com\/wp-content\/uploads\/2026\/04\/dirty-legacy-data-unexpected-ai-bottleneck-en-featured.jpg","contentUrl":"https:\/\/www.datamondial.com\/wp-content\/uploads\/2026\/04\/dirty-legacy-data-unexpected-ai-bottleneck-en-featured.jpg","width":1376,"height":768,"caption":"Logistics server room with digital shipping containers, illustrating the process of cleaning legacy data for AI in a high-tech environment."},{"@type":"BreadcrumbList","@id":"https:\/\/www.datamondial.com\/en\/dirty-legacy-data-unexpected-ai-bottleneck\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.datamondial.com\/en\/"},{"@type":"ListItem","position":2,"name":"Dirty Legacy Data: The Unexpected Bottleneck in Your Logistics AI 
Project"}]},{"@type":"WebSite","@id":"https:\/\/www.datamondial.com\/#website","url":"https:\/\/www.datamondial.com\/","name":"DataMondial","description":"","publisher":{"@id":"https:\/\/www.datamondial.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.datamondial.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.datamondial.com\/#organization","name":"DataMondial","url":"https:\/\/www.datamondial.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.datamondial.com\/#\/schema\/logo\/image\/","url":"https:\/\/www.datamondial.com\/wp-content\/uploads\/2022\/10\/datamondial_onderschrift.svg","contentUrl":"https:\/\/www.datamondial.com\/wp-content\/uploads\/2022\/10\/datamondial_onderschrift.svg","width":431,"height":94,"caption":"DataMondial"},"image":{"@id":"https:\/\/www.datamondial.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.linkedin.com\/company\/datamondial\/"]},{"@type":"Person","@id":"https:\/\/www.datamondial.com\/#\/schema\/person\/5438b776538ac7702fbaa3b85ebf463e","name":"Ralph van 
Es"}]}},"_links":{"self":[{"href":"https:\/\/www.datamondial.com\/en\/wp-json\/wp\/v2\/posts\/15217","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.datamondial.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.datamondial.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.datamondial.com\/en\/wp-json\/wp\/v2\/users\/10"}],"replies":[{"embeddable":true,"href":"https:\/\/www.datamondial.com\/en\/wp-json\/wp\/v2\/comments?post=15217"}],"version-history":[{"count":2,"href":"https:\/\/www.datamondial.com\/en\/wp-json\/wp\/v2\/posts\/15217\/revisions"}],"predecessor-version":[{"id":15242,"href":"https:\/\/www.datamondial.com\/en\/wp-json\/wp\/v2\/posts\/15217\/revisions\/15242"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.datamondial.com\/en\/wp-json\/wp\/v2\/media\/15216"}],"wp:attachment":[{"href":"https:\/\/www.datamondial.com\/en\/wp-json\/wp\/v2\/media?parent=15217"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.datamondial.com\/en\/wp-json\/wp\/v2\/categories?post=15217"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.datamondial.com\/en\/wp-json\/wp\/v2\/tags?post=15217"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}