Exploring Agentic AI: The Future of Digital Workforces
At the recent AI & Big Data Expo and the Intelligent Automation Conference, discussions revolved around the transformative potential of AI as a digital co-worker. Day one laid the groundwork for practical implementations, while the technical sessions delved into the infrastructure that will support this evolution.
The Shift to Agentic Systems
A major focus on the exhibition floor was the transition from traditional robotic process automation (RPA) towards more sophisticated “agentic” systems. These advanced tools not only automate tasks but also reason, plan, and execute—moving beyond monotonous, rigid scripts. Amal Makwana from Citi emphasized how agentic AI interacts seamlessly across enterprise workflows, distinguishing it from its predecessors.
Scott Ivell and Ire Adewolu from DeepL articulated this development as a method to close what they termed the “automation gap.” This represents the difference between intent and action in organizational processes. The real value lies in reducing this gap, allowing for efficient execution of tasks without constant human intervention. Brian Halpin from SS&C Blue Prism echoed this sentiment, noting that organizations need to master basic automation before scaling to agentic AI capabilities.
The Need for Governance Frameworks
Implementing agentic systems introduces complexities that necessitate robust governance frameworks. As Steve Holyer from Informatica pointed out, managing non-deterministic outcomes requires stringent oversight. Alongside speakers from MuleSoft and Salesforce, Holyer emphasized the importance of establishing a governance layer that dictates how these systems access and utilize data, thereby mitigating operational risks related to data mishandling.
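The governance layer Holyer describes boils down to a policy check that sits between an agent and enterprise data. A minimal sketch of that idea, with entirely hypothetical names (`AccessRequest`, `POLICY`, `is_allowed` are illustrative, not any vendor's API):

```python
# Illustrative governance layer: an agent must declare a purpose, and the
# policy table decides which datasets that purpose may touch.
from dataclasses import dataclass


@dataclass(frozen=True)
class AccessRequest:
    agent_id: str
    dataset: str
    purpose: str


# Policy table mapping a declared purpose to the datasets it may access.
POLICY = {
    "invoice_processing": {"invoices", "vendors"},
    "support_triage": {"tickets"},
}


def is_allowed(req: AccessRequest) -> bool:
    """Grant access only if the declared purpose covers the dataset."""
    return req.dataset in POLICY.get(req.purpose, set())
```

In a real deployment this check would also log each request for audit, which is what turns a gate like this into the "governance layer" the speakers describe.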
Data Quality: A Critical Barrier
The performance of any autonomous system is directly tied to the quality of its input data. Andreas Krause from SAP underscored that AI initiatives falter without reliable, interconnected enterprise data. For generative AI to thrive in a corporate setting, it must draw on data that is both accurate and contextually relevant.
Addressing the technical issues around “hallucinations” in large language models (LLMs), Meni Meller from Gigaspaces advocated for using retrieval-augmented generation (RAG) together with semantic layers. This combination can help rectify data access problems, enabling models to pull factual insights from enterprise databases in real time.
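The core of the RAG pattern described here is simple: retrieve the most relevant enterprise record, then ground the model's prompt in it. A toy sketch of that flow (the keyword-overlap scoring and `build_prompt` helper are illustrative stand-ins; production systems use vector search over a semantic layer and an actual LLM call):

```python
# Toy RAG pipeline: pick the most relevant document for a query, then
# build a prompt that forces the model to answer from that context.
def score(query: str, doc: str) -> int:
    """Crude relevance score: count of shared lowercase tokens."""
    return len(set(query.lower().split()) & set(doc.lower().split()))


def retrieve(query: str, docs: list[str]) -> str:
    """Return the single highest-scoring document."""
    return max(docs, key=lambda d: score(query, d))


def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the question in retrieved context before it reaches the model."""
    context = retrieve(query, docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


docs = [
    "Q3 revenue for the EMEA region was 12.4M EUR.",
    "The office dress code policy was updated in June.",
]
print(build_prompt("What was EMEA revenue in Q3?", docs))
```

Because the model only sees the retrieved record, its answer is tied to the enterprise data rather than to whatever it memorized in training, which is how RAG curbs hallucinations.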
Cloud-Native Analytics Challenges
Cloud-native, real-time analytics are equally critical, as highlighted by a panel featuring representatives from Equifax, British Gas, and Centrica. For these organizations, a competitive edge hinges on the capability to implement scalable and immediate analytics strategies.
Ensuring Safety in Physical Environments
The integration of AI into physical environments introduces safety risks distinct from traditional software failures. A panel including Edith-Clare Hall from ARIA and Matthew Howard from IEEE RAS explored how embodied AI is being utilized in factories, office spaces, and public venues. Establishing safety protocols before robots engage with humans is paramount.
Perla Maiolino from the Oxford Robotics Institute provided insights into this dilemma through her research on Time-of-Flight (ToF) sensors and electronic skin technologies. These innovations aim to equip robots with both self-awareness and environmental awareness, significantly reducing the risk of accidents in sectors like manufacturing and logistics.
Observability in Software Development
In the realm of software development, observability remains an essential challenge. Yulia Samoylova from Datadog emphasized how AI alters the landscape for development and troubleshooting. As systems achieve greater autonomy, understanding their internal operations and decision-making processes becomes imperative for reliability.
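One concrete way to make an autonomous system's decisions inspectable is to record a trace of every tool call it makes. A minimal sketch of that idea (the `traced` decorator and in-memory `TRACE` list are hypothetical; a real setup like the one Datadog supports would ship these records to a tracing backend):

```python
# Instrument an agent's tool calls so every decision leaves an auditable
# trace record: which tool ran, with what inputs, and what came back.
import functools
import time

TRACE: list[dict] = []


def traced(fn):
    """Wrap a tool function so each call appends a record to TRACE."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACE.append({
            "tool": fn.__name__,
            "args": args,
            "result": result,
            "ms": (time.perf_counter() - start) * 1000,
        })
        return result
    return wrapper


@traced
def lookup_order(order_id: str) -> str:
    """Stand-in for a tool the agent might invoke."""
    return f"order {order_id}: shipped"


lookup_order("A-42")
```

When something goes wrong, the trace shows which step the agent took and why, rather than leaving operators to guess at a black box.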
Addressing Infrastructure and Adoption Barriers
Successful AI implementation requires dependable infrastructure supported by a culture that embraces innovation. Julian Skeels from Expereo discussed the necessity of designing networks specifically for AI workloads. This approach involves creating sovereign, secure, and “always-on” network fabrics capable of sustaining high throughput.
The unpredictability of the human element complicates matters. Paul Fermor from IBM Automation cautioned that conventional automation perspectives often overlook the intricacies involved in adopting AI technologies. He referred to this as the “illusion of AI readiness.” Supporting his point, Jena Miller stressed the importance of adopting human-centered strategies to ensure that the workforce feels comfortable with new tools, as a lack of trust can diminish technological returns.
In a proactive stance, Ravi Jay from Sanofi suggested that leaders need to initiate operational and ethical discussions early in the implementation process. A key determinant of success lies in deciding whether to build proprietary solutions or leverage established platforms.
The conversations from day one at this co-located event reveal a clear trajectory toward the utilization of autonomous agents, contingent upon a solid foundation of data.
Want to stay updated on AI and big data from industry leaders? Explore the AI & Big Data Expo, taking place in Amsterdam, California, and London. This comprehensive event is part of TechEx and is co-located with other significant technology events, including the Cyber Security & Cloud Expo.

