Amani: Hello, good morning or good afternoon everyone, and welcome to today's SYSGO TechCast. My name is Amani Karchoud, Technical Product Marketing Manager at SYSGO, and in this episode I'm joined by Abdessalem Mami. Abdessalem is part of the Innovation Lab team; we will hear more about his technical background and discuss topics related to AI and Edge Computing. So, without further ado, let's get started. Welcome, Abdessalem, great to have you here!
Abdessalem: Thank you, Amani. I’m really excited to be here with you today.
Amani: Thank you, Abdessalem. So, could you please introduce yourself to our listeners? And maybe tell us a bit about your background and how you became involved in the field of AI?
Abdessalem: So, I’ve been working as an AI intern at SYSGO’s Innovation Lab in Mainz since April. To share a bit of my background: I started my AI journey with a Computer Science degree and am now completing a 3-year engineering degree in AI.
While I've always been passionate about computer science, my interest in AI began in 2020 during a university course on expert systems and neural networks. Before that, AI was just an abstract concept - fascinating but distant. The idea of building networks of artificial neurons that process information, recognize patterns, and learn from experience was inspiring to me. It felt like unlocking a world where technology not only solves problems but also imitates some of the most wonderful aspects of human intelligence.
Since that course, I've pursued a degree focused on AI, participated in AI competitions, and co-founded DeepFlow, an AI student community at my university. I think all of that has brought me to where I am today.
Amani: Yeah, that's a really fascinating journey! So now, speaking of AI evolving so quickly: what are some of the most exciting trends you're seeing today, and how are you exploring these advancements at SYSGO, especially in the area of generative AI?
Abdessalem: The pace at which AI has evolved over the past decade is nothing short of remarkable. Generative AI, in particular, has captured global attention, not just within the tech community but across various industries and the public sphere. We're now moving from an era where generative AI was something of a novelty, showcased through tools like ChatGPT and DALL-E, to a phase where businesses are starting to harness its potential in more targeted and impactful ways.
It's important to note that while generative AI is powerful, in its current state it's not AGI. The initial hype is beginning to settle, and we're gaining a clearer understanding of both the capabilities and limitations of these large language models. Yet, despite this tempered excitement, the adoption of generative AI by businesses is actually accelerating. Companies aren't viewing generative AI as a single tool that will do all the work on its own, but rather as embedded features, I would say, that enhance and augment existing products and workflows. This technology is expanding the boundaries of what machines can achieve, and it's clear that we're no longer just making assumptions about its potential; we're seeing it actively shaping the competitive landscape. If we think about the impact of electricity in enabling the modern world, from the telephone to the computer, AI is playing a similar foundational role for future innovations.
Amani: Yeah, exactly, so you’re saying that AI is fundamentally reshaping our world today.
Abdessalem: It's driving advancements in everything from scientific research to smart cities, laying the groundwork for technological progress that we may not yet fully comprehend. As Mustafa Suleyman, co-founder of DeepMind, put it in his book "The Coming Wave": the future will be defined by two core technologies, AI and synthetic biology.
However, widespread adoption of any technology depends on its accessibility, and generative AI is no different. The costs associated with building and training Large Language Models (LLMs) from scratch are prohibitive for most organizations. For instance, the latest LLM from Meta is estimated to have cost up to 640 million dollars, which is a crazy number. And that's just for training. But what's bridging this gap and accelerating adoption is the incredible work being done by the open-source community. They are pushing forward with tools, frameworks, and models that are democratizing access to really sophisticated AI capabilities. This trend, I think, is particularly exciting because it lowers the barriers to entry, making AI more accessible than ever.
And you know, major AI players are treating this as an iterative process. They're continuously refining and improving models with each new version and iteration. We're not just talking about enhancements to the underlying architecture, but also about addressing crucial issues like responsible and ethical AI. Bias in AI, for instance, is a significant concern because these models are trained on data that inherently contains human biases. So, we need to ensure that we can refine this data and implement controls over model outputs, which is a really critical step in this process. This is also where regulations like the EU AI Act come into play, aiming to build trust and prevent harm as AI becomes more integrated into society. But the real challenge is striking a balance between innovation and safety. And this will be the key in the future, I would say.
Amani: Yeah, absolutely!
Abdessalem: Another fascinating trend is the development of multi-modal generative models. These models can understand and process different types of inputs, let's say texts, images, and more, all at the same time. This means AI systems that can perceive and interact with their environment in real time, make decisions, and even execute complex actions. The convergence of these trends - generative AI, the open-source movement, model efficiency, ethical considerations, and multi-modal capabilities - will profoundly shape the future of AI.
At SYSGO, as an AI intern, I've been involved in exploring how AI can benefit our work. As the leading European company in embedded operating systems, SYSGO’s commitment to innovation drives us to continually seek out new ways to create value for our customers worldwide.
Amani: You highlighted some critical points about the balance between innovation and ethical considerations in AI. In this context, how do you see these technologies impacting real-world applications today? And could you maybe share some examples from your work at SYSGO, or from what you have observed in the industry?
Abdessalem: Really interesting question. It touches on a crucial intersection of technology where we see intelligence being brought directly to the devices that interact with our world. When we talk about AI, we're discussing something that has evolved significantly since its early days. John McCarthy, one of the pioneers, defined it as the science and engineering of making intelligent machines. And while AI has grown more sophisticated, at its core it's still about enabling machines to reason, learn, and make decisions.
Embedded systems, on the other hand, are designed for specific tasks, often operating within larger systems - think of them like the nervous system in a human body, controlling various functions quietly in the background. When AI merges with embedded systems, we witness the birth of what some call Embedded AI or Edge AI, depending on the context. This integration isn't just a technical upgrade; it's a shift that brings intelligence closer to the source of data, enabling real-time decision-making where it matters most. And this is the most interesting part of this integration, I would say.
Autonomous vehicles are perhaps the most compelling example of this integration. These vehicles are equipped with an array of sensors - cameras, LiDAR, radar, GPS, and other kinds of sensors - all feeding data into embedded systems that manage the vehicle's core functions, like steering, braking, and acceleration. But when you introduce AI, something transformative happens. The vehicle evolves from being a machine that follows a pre-programmed set of rules to one that can understand and navigate the complexities of the real world.
For instance, imagine a scenario where a child unexpectedly steps into the street. The AI embedded in the vehicle's system processes the visual data from cameras and immediately recognizes the child's intent to cross. In milliseconds, the AI can directly command the embedded system to initiate braking and avoid a potential accident. It's not just following a rule; it's understanding a situation in a way that feels almost human. And this is only possible because the AI is embedded directly in the system, processing data in real time, right where it's needed.
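To make that decision step concrete, here is a minimal, purely illustrative sketch of the kind of logic that could sit between perception and the brake actuator. The `Detection` record, the distance threshold, and the decision rule are all assumptions for illustration, not any real ADAS API; production systems involve object tracking, trajectory prediction, and certified actuation paths:

```python
from dataclasses import dataclass

# Hypothetical, simplified detection record; a real perception stack
# produces far richer output (class scores, bounding boxes, tracks).
@dataclass
class Detection:
    distance_m: float        # estimated distance to the object
    heading_into_lane: bool  # flag from an assumed trajectory predictor

BRAKE_DISTANCE_M = 15.0      # assumed threshold; real values depend on speed

def should_brake(detections: list[Detection]) -> bool:
    """Decide whether to command emergency braking for this frame."""
    return any(
        d.distance_m < BRAKE_DISTANCE_M and d.heading_into_lane
        for d in detections
    )

# Example: a child detected 9 m ahead, moving toward the lane.
frame_detections = [Detection(distance_m=9.0, heading_into_lane=True)]
if should_brake(frame_detections):
    print("Emergency braking commanded")  # a real system drives the actuator
```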
In healthcare, to give another example, the convergence of AI and embedded systems is revolutionizing patient care. Consider wearable health monitors like smartwatches, which continuously track vital signs such as heart rate, blood pressure, and oxygen levels. These devices are powered by embedded systems that handle the collection and processing of data, but it's the AI embedded within them that truly amplifies their capabilities.
For example, an AI-enabled smartwatch can detect irregular heartbeats, such as atrial fibrillation, which might go unnoticed in a typical medical check-up. The AI analyzes the data in real time, identifies the anomaly, and alerts the wearer to seek medical attention. This kind of early detection can be life-saving; that's beyond question. In more advanced applications, AI and embedded systems are being used in devices like insulin pumps. These pumps, equipped with AI, can predict fluctuations in blood sugar levels and adjust insulin delivery automatically, minimizing the risks associated with diabetes management.
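As a rough illustration of what such rhythm analysis could look like at its very simplest, here is a sketch that flags irregular beat-to-beat (RR) intervals via their coefficient of variation. The threshold and the sample data are invented for illustration; real wearables use validated, clinically tested algorithms:

```python
import statistics

def rr_irregularity(rr_intervals_ms: list[float]) -> float:
    """Coefficient of variation of beat-to-beat (RR) intervals.
    High values indicate an irregular rhythm - a crude proxy for AFib."""
    return statistics.stdev(rr_intervals_ms) / statistics.mean(rr_intervals_ms)

AFIB_THRESHOLD = 0.12  # assumed cutoff, for illustration only

regular = [810, 795, 805, 800, 790, 808]     # steady rhythm (ms)
irregular = [620, 990, 710, 1040, 580, 860]  # erratic rhythm (ms)

for name, rr in [("regular", regular), ("irregular", irregular)]:
    cv = rr_irregularity(rr)
    flag = "alert wearer" if cv > AFIB_THRESHOLD else "ok"
    print(f"{name}: CV={cv:.3f} -> {flag}")
```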
Amani: Yeah, that's a powerful example of how AI is making real-time decisions in such critical situations. So now, Abdessalem, let's shift a bit to the space topic. Here I am curious: how does PikeOS enhance satellite operations, especially with the integration of AI? And how does it improve safety and functionality in this context?
Abdessalem: Satellites are equipped with embedded systems that manage critical functions like maintaining orbit and communicating with ground stations. SYSGO's PikeOS, for example, has historically been trusted as a real-time operating system that powers satellites and allows them to perform reliably even in the harshest conditions of space. Integrating AI with what already exists takes space missions' capabilities to the next level.
In Earth observation satellites, for example, AI can analyze vast amounts of imaging data to detect changes in the environment, such as deforestation, urban expansion, or natural disasters like wildfires and floods. By processing this data onboard, the satellite can quickly identify significant events and respond in real time, without waiting for instructions from Earth. This capability is invaluable, especially in disaster response, where every second counts. Additionally, AI in these satellites can play another role in collision avoidance, predicting and preventing potential collisions with space debris by autonomously adjusting the satellite's orbit. So, that's an interesting use case for this kind of integration of AI and embedded systems.
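At its simplest, onboard change detection can be thought of as comparing two co-registered image tiles and alerting when enough pixels differ. The sketch below does exactly that with synthetic data and invented thresholds; a real mission would layer learned models, radiometric calibration, and cloud masking on top of this idea:

```python
import numpy as np

def change_fraction(before: np.ndarray, after: np.ndarray,
                    pixel_delta: float = 0.2) -> float:
    """Fraction of pixels whose normalized intensity changed by more
    than `pixel_delta` between two co-registered tiles."""
    return float(np.mean(np.abs(after - before) > pixel_delta))

ALERT_THRESHOLD = 0.05  # assumed: alert if >5% of the tile changed

rng = np.random.default_rng(0)
before = rng.random((256, 256))
after = before.copy()
after[40:120, 60:180] += 0.5  # synthetic "burn scar" appearing in the tile

if change_fraction(before, after) > ALERT_THRESHOLD:
    print("Significant change detected -> downlink alert to ground station")
```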
Amani: Yeah absolutely, that's fascinating! Especially how AI is making such a big impact from healthcare to space.
Abdessalem: This convergence is making technology more responsive, more autonomous, and ultimately, more human. It’s somehow creating something that’s greater than the sum of its parts. And it also informs us about the kinds of systems we can expect to see in the future, and certainly the developments in technology at both the hardware and software levels.
What's more exciting, I think, is that these innovations aren't isolated to one industry. We're seeing cross-industry adoption, where breakthroughs or ideas in one field, like autonomous driving, are informing and advancing applications in others, like healthcare and space.
Amani: Can you maybe talk more about these advancements, how they can impact other industries, and what are some examples of this crossover effect?
Abdessalem: Whether it's in the cars we drive, the devices that monitor our health, or the satellites that observe our planet: AI brings many advantages to embedded systems, offering a wide range of techniques and solutions for applications such as object detection, classification, anomaly detection, threat prevention, and predictive analytics.
For example, take Advanced Driver Assistance Systems (ADAS) in vehicles: traditional systems may rely on basic sensors that provide really limited information. In contrast, AI-powered systems can process complex data from cameras, LiDAR, and other sensors to accurately identify vehicles, pedestrians, cyclists, road signs, and other critical objects on the road. AI algorithms, like Convolutional Neural Networks (CNNs), enable quick differentiation between pedestrians and inanimate objects. This allows vehicles to make split-second decisions to avoid accidents. This processing supports functions such as Automatic Emergency Braking (AEB) and lane-keeping assistance.
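To give a feel for what CNN inference on a camera crop involves, here is a deliberately tiny PyTorch classifier over four hypothetical classes (car, pedestrian, cyclist, sign). It is a toy, not an ADAS model; production perception networks are far larger and run on hardware-accelerated, safety-qualified inference stacks:

```python
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Toy CNN: two conv/pool stages followed by a linear head."""
    def __init__(self, num_classes: int = 4):  # car, pedestrian, cyclist, sign
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = TinyClassifier().eval()
frame_crop = torch.rand(1, 3, 64, 64)  # one 64x64 RGB crop from a camera frame
with torch.no_grad():
    logits = model(frame_crop)
print("Predicted class index:", int(logits.argmax(dim=1)))
```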
AI also allows ADAS to learn and adapt to individual driving patterns and environments, which is a really interesting use case. It can enhance overall driving comfort and safety, because the system can now analyze data over time to predict and adjust to the driver's behavior and preferences - and not just that, but also to different road conditions.
In urban environments, for example, AI can recognize traffic behavior patterns, such as the timing of traffic lights or the likelihood of pedestrians crossing at certain spots. This enables the system to optimize vehicle speed and fuel efficiency at the same time.
Amani: That's super interesting, how AI can enhance safety and learn from drivers' habits. Can you talk a bit more about how it handles security in general, for example dealing with cyber threats or other unexpected issues?
Abdessalem: AI can also enhance the security of ADAS by enabling advanced anomaly detection and threat prevention. It can monitor for unusual driving behavior, potential hazards, or signs of system tampering, and respond to prevent accidents or security breaches. For instance, if erratic driving behavior is detected, the system can alert the driver or even take control to prevent a potential accident.
Furthermore, AI improves ADAS cybersecurity by monitoring the system for signs of cyber attacks and ensuring data integrity. If an AI-driven ADAS detects an attempt to override its sensor inputs or communication channels, it can isolate the affected components, maintain safe driving operations, and notify the driver or manufacturer.
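As a minimal sketch of the underlying idea, the snippet below flags outliers in steering-angle telemetry using a robust modified z-score. The signal choice, the data, and the threshold are assumptions for illustration; deployed systems use learned, multi-signal anomaly detectors:

```python
import statistics

def mad_anomalies(values: list[float], threshold: float = 3.5) -> list[int]:
    """Indices whose modified z-score (based on the median absolute
    deviation) exceeds `threshold`; robust to the outliers themselves."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return [i for i, v in enumerate(values)
            if mad and abs(v - med) / (1.4826 * mad) > threshold]

# Steering-angle telemetry (degrees) with one erratic spike injected.
steering = [1.2, -0.8, 0.5, 1.0, -0.3, 34.0, 0.7, -1.1, 0.9, 0.2]
suspicious = mad_anomalies(steering)
if suspicious:
    print(f"Erratic input at samples {suspicious} -> alert driver / fail safe")
```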
AI also excels at sensor fusion: combining data from cameras, radar, LiDAR, and other sensors to create a comprehensive understanding of the vehicle's environment. This multisensory data integration improves decision-making and safety. For example, AI can merge radar and camera data to accurately detect the distance and speed of an approaching vehicle, even in poor weather conditions.
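The textbook building block behind such fusion is inverse-variance weighting - the same idea a Kalman filter applies recursively over time. Here is a sketch with invented noise figures, where the fused distance estimate correctly leans toward the more reliable radar in poor weather:

```python
def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Inverse-variance weighted fusion of two independent estimates."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)  # fused value and its variance

# Assumed sensor noise figures: radar stays precise in rain,
# while the camera's distance estimate degrades.
radar_m, radar_var = 42.3, 0.5
camera_m, camera_var = 45.1, 4.0

dist, var = fuse(radar_m, radar_var, camera_m, camera_var)
print(f"Fused distance: {dist:.1f} m (variance {var:.2f})")  # ~42.6 m
```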
But it also extends to other applications within the vehicle. For example, AI can be applied to voice-controlled systems, which allow drivers to interact with their vehicle through natural language commands to control navigation or adjust climate settings. Another use case of AI here is reinforcement learning, which can be used to train autonomous driving algorithms to optimize driving strategies, like efficient lane changing, by continuously learning from simulations and real-world driving data.
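To show the reinforcement-learning loop in the smallest possible form, here is a tabular Q-learning toy over four synthetic "traffic" states with invented rewards. Real lane-change research uses rich simulators and deep RL, but the update rule below is the same one at the core:

```python
import random

# States encode (gap ahead is small?, target lane is open?) as 0..3;
# actions: 0 = stay in lane, 1 = change lane. Rewards are invented.
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2
Q = {(s, a): 0.0 for s in range(4) for a in (0, 1)}

def reward(state: int, action: int) -> float:
    gap_small, lane_open = state % 2 == 1, state >= 2
    if action == 1:                        # attempt a lane change
        return 1.0 if lane_open else -5.0  # unsafe change is penalized
    return -1.0 if gap_small else 0.5      # staying behind slow traffic costs

random.seed(0)
for _ in range(5000):
    s = random.randrange(4)
    a = (random.choice((0, 1)) if random.random() < EPS
         else max((0, 1), key=lambda x: Q[(s, x)]))
    s_next = random.randrange(4)           # toy transition: random next state
    best_next = max(Q[(s_next, 0)], Q[(s_next, 1)])
    Q[(s, a)] += ALPHA * (reward(s, a) + GAMMA * best_next - Q[(s, a)])

for s in range(4):
    best = "change" if Q[(s, 1)] > Q[(s, 0)] else "stay"
    print(f"state {s}: learned action = {best}")
```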
All these AI techniques, whether used independently or in combination, significantly enhance the performance, safety, and functionality of automotive systems, particularly in ADAS use cases.
Amani: Very good. AI is really improving safety and security in vehicles, and it will lead to even more intelligent and autonomous cars in the future. Thank you so much for these insights so far. And with this we wrap up the first part of our journey into AI. Don't miss our next episode, where we will dive even deeper into this fascinating topic. Until then, see you soon, and take care!