Is there no further progress in the IT industry without AI? What are the most significant benefits and challenges associated with the development of artificial intelligence? How can you use these technologies responsibly in your work? We discuss this with Bartosz Dziura, Lead Software Architect, and Marcin Caryk, Lead Test Engineer at Codelab.
“We can stay in denial and joke as new tools take their first, often awkward steps in the next technological revolution. But a quarter of a century ago, many also doubted that the Internet and Google would become so indispensable to most of us,” says Bartosz Dziura, Lead Software Architect at Codelab, who uses AI in his work on automotive projects. What does such work look like, and what will the future of the IT industry look like as AI technology evolves rapidly?
AI in IT and automotive: challenges and threats
Experts Bartosz Dziura and Marcin Caryk discuss working for Codelab.
You both work for Codelab in the automotive industry. Bartosz as a Lead Software Architect, and Marcin as a Lead Test Engineer. How do you use AI in your work?
Bartosz Dziura: We primarily work in an industry where artificial intelligence, especially machine learning, plays a key role. Machine learning is heavily used in advanced driver assistance systems, such as autonomous emergency braking, lane assist, autopilot, or driver monitoring (for example, assessing drowsiness, fatigue, and whether the driver keeps their focus on the road).
Although these systems have a very precise perception of the environment, short-term predictive capability, and inhumanly fast response times, we are still talking about highly specialized solutions. As a result, they are still far from artificial general intelligence (AGI). I am referring to the so-called “strong AI”, capable of replacing us in almost all tasks, which many have speculated about in the context of the recent boom in popularity and usefulness of chatbot technologies such as ChatGPT. In our work, the direct use of the latter is severely limited, especially with respect to generated code or content.
The main reasons for this status quo are legal risks and the lack of copyright clarity, both for the generated output and for the source materials used to train these models.
Marcin Caryk: AI is a hot topic right now, with people everywhere talking about it and trying to use it. My professional activities relate to automotive, so the use of publicly available AI, i.e., cloud-based models, is very limited. This is due to legal constraints and the risk of exposing industry-specific solutions to publicly available models. As a tester, I occasionally use ChatGPT as an assistant. However, I do this only for general issues unrelated to client work, such as when working with Python or as part of my academic activity as a teacher. I try to show students how such a tool can be used wisely. I teach them not to be cognitively passive about the content such models generate; in other words, to analyze and verify what artificial intelligence produces.
What do you see as the biggest challenge in using AI at work?
BD: One of the biggest challenges is verifying the safety and correctness of AI-based systems. Training the model itself, assuming access to a sufficiently large dataset, is relatively simple. The problem arises when, at the end of this process, we end up with a black box solution, the performance of which is challenging to evaluate without tests that are as comprehensive and diverse as the ultimate real-world use cases.
So, some example questions that need to be asked and verified are:
- Does the system recognize and interpret traffic signs, lights and road markings correctly and equally well in, let’s say, the US, China or South America?
- How does it handle difficult conditions such as night, rain, winter in Norway, getting sun-blinded on a highway in Brazil, or driving on a tight road in the countryside in the UK?
- Can it detect all usual traffic participants, but also any other objects that one might encounter by accident on the road?
- Will the system correctly recognize a STOP sign even when defaced by stickers or graffiti?
- Are there exploitable vulnerabilities in the model that potential bad actors might deliberately use to compromise its security?
- Is the system immune to “hallucinations” — false detections of non-existent obstacles that could trigger tragic emergency braking on an empty highway?
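Questions like the defaced STOP sign above can be probed systematically by perturbing inputs and checking that the model's output stays stable. The sketch below is purely illustrative: `detect_stop_sign` is a hypothetical stand-in for a real perception model, and the "images" are toy character grids, not camera frames.

```python
# Minimal sketch of a robustness check against occlusion (stickers, graffiti).
# detect_stop_sign and the toy grid "images" are made-up placeholders.
import random

def detect_stop_sign(image):
    """Stub detector: 'sees' a STOP sign if enough red pixels remain."""
    red_pixels = sum(row.count("R") for row in image)
    total = sum(len(row) for row in image)
    return red_pixels / total > 0.5

def occlude(image, fraction, rng):
    """Simulate defacement by overwriting a random fraction of pixels."""
    flat = [(r, c) for r, row in enumerate(image) for c in range(len(row))]
    covered = rng.sample(flat, int(len(flat) * fraction))
    out = [list(row) for row in image]
    for r, c in covered:
        out[r][c] = "#"
    return ["".join(row) for row in out]

rng = random.Random(42)
clean_sign = ["RRRR"] * 4  # idealized all-red STOP sign

assert detect_stop_sign(clean_sign)
# The detector should tolerate moderate defacement:
assert detect_stop_sign(occlude(clean_sign, 0.25, rng))
```

A real verification campaign would sweep many perturbation types (noise, blur, lighting, adversarial patches) and measure how detection confidence degrades, rather than using a single threshold.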
MC: To put it very broadly, the biggest problem with any AI model is its effectiveness: that is, its ability to respond to various inputs with the “correct” response, the one we expect to get. The designer of such a model has to anticipate as many cases as possible, especially extreme ones, prepare the corresponding data, and train the model so that it responds appropriately.
In my career, I have been involved in developing neural network models for recognizing defects in welded ship parts on X‑ray images using machine learning (ML). Image processing and preparing the input data to train the model is a long and tedious job, but a crucial one. As it turned out, the ML model did worst where I had the least data: weld cracks, which a human can easily spot, rejecting such a weld without any special analysis.
Input data, in both variety and quantity, is vital. Data for ChatGPT is already available in abundance. Similarly, all the data that builds our profiles, revealing what we like, what we order, what sites we browse, etc., already exists; we provide it ourselves every day. For the automotive industry and autonomous driving, it's still a niche; we are still collecting this type of data. Driving through a historic medieval town with narrow streets looks different from driving through a modern city on a three-lane road, so the possible scenarios are numerous.
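The weld-defect anecdote above is a classic class-imbalance problem: the rarest class is the one the model learns worst. A common (though not the only) remedy is inverse-frequency class weighting during training, sketched here with made-up counts:

```python
# Hypothetical illustration of the data-imbalance problem described above:
# the rarest defect class (here "crack") gets the largest training weight.
from collections import Counter

labels = ["ok"] * 900 + ["porosity"] * 80 + ["crack"] * 20  # toy dataset

counts = Counter(labels)
total = len(labels)
# Inverse-frequency class weights: rare classes count more per example.
weights = {cls: total / (len(counts) * n) for cls, n in counts.items()}

# The underrepresented "crack" class receives the highest weight.
assert max(weights, key=weights.get) == "crack"
```

In practice, collecting more examples of the rare class (or augmenting existing ones) usually helps more than reweighting alone.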
In summary, the challenges AI faces include achieving faster response times, achievable through advanced hardware, developing more efficient models to reduce resource requirements, and addressing the need for large, diverse, and legally compliant datasets for training, validation, and testing.
How do you verify the suitability of a given AI-based solution?
BD: The most commonly used method is a massive data collection campaign around the world, performed using specially adapted test cars. These vehicles meticulously collect all input and output data, both from the existing internal subsystems and from precisely calibrated auxiliary sensors like LIDAR or GPS. More dangerous scenarios that should not be verified on public roads, such as simulating the conditions for emergency braking to avoid hitting a pedestrian, are driven on dedicated test tracks. All in all, we are talking about tens or hundreds of thousands of hours of recordings, millions of kilometers driven, and petabytes of accumulated data.
It is also worth noting that such verifications must be repeated regularly as the software evolves. For this purpose, software/hardware-in-the-loop techniques are employed to allow regular re-simulation of all collected data. These simulated outputs are then analyzed through multivariate methods to detect any negative correlations, biases (such as a tendency to overestimate distances), regressions, or unexpected edge cases. Given the scale of data, even with supercomputers, a single iteration can take weeks to analyze.
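One of the checks mentioned above, a tendency to overestimate distances, reduces to comparing each software version's estimates against ground truth and watching the mean signed error. The numbers and the acceptance threshold below are invented for illustration:

```python
# Sketch of a bias/regression check: compare a version's distance
# estimates against ground truth (e.g. LIDAR). All values are made up.
from statistics import mean

ground_truth = [10.0, 25.0, 40.0, 60.0, 80.0]   # metres, reference sensor
estimates_v1 = [10.1, 24.8, 40.3, 59.9, 80.2]   # software version N
estimates_v2 = [11.0, 26.2, 41.5, 61.8, 82.0]   # software version N+1

def signed_bias(estimates, truth):
    """Mean signed error; positive means distances are overestimated."""
    return mean(e - t for e, t in zip(estimates, truth))

BIAS_LIMIT = 0.5  # metres, an assumed acceptance threshold

assert abs(signed_bias(estimates_v1, ground_truth)) < BIAS_LIMIT
# Version N+1 systematically overestimates: a regression to flag.
assert abs(signed_bias(estimates_v2, ground_truth)) > BIAS_LIMIT
```

A real analysis pipeline would run this over millions of frames and many metrics at once, which is why a single iteration can take weeks even on supercomputers.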
MC: It all depends on the context in which the solution and model are used. When it comes to AI in automotive and autonomous driving systems, among other things, this is a complex process. When creating the model, we already divide the data into training, validation, and test sets. Once the AI model or the entire system is created, there is a verification and analysis process. Then, after the model is implemented on the target hardware, there is a verification process in a simulation environment. Among other things, the response time and the correctness of the responses against reference data are checked. The data here is generated and does not yet come from the actual sensors or hardware. Various simulators are often used for this, feeding processed data to the AI model. This is Software-In-the-Loop testing. Once all the components, including sensors and cameras, are put together, the Hardware-In-the-Loop testing process takes place.
This is the process of performing integration testing, where the cooperation of all system components is checked using performance and optimization tests.
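The Software-In-the-Loop stage described above, replaying data into the model and checking both correctness and response time, can be caricatured in a few lines. Everything here is a placeholder: the `model` stub, the recorded frames, and the latency budget are all assumptions, not a real ADAS stack.

```python
# A toy software-in-the-loop (SIL) harness: replay recorded frames into
# a model stub and check correctness and per-frame response time.
import time

def model(frame):
    """Stub perception model: brake if any obstacle is closer than 5 m."""
    return "BRAKE" if min(frame["distances"]) < 5.0 else "CRUISE"

# (frame, expected output) pairs, standing in for recorded drive data.
recorded_run = [
    ({"distances": [30.0, 12.5]}, "CRUISE"),
    ({"distances": [4.2, 18.0]}, "BRAKE"),
    ({"distances": [9.9, 7.1]}, "CRUISE"),
]

MAX_LATENCY_S = 0.01  # assumed real-time budget per frame

for frame, expected in recorded_run:
    start = time.perf_counter()
    output = model(frame)
    latency = time.perf_counter() - start
    assert output == expected, f"wrong output for {frame}"
    assert latency < MAX_LATENCY_S, f"too slow: {latency:.4f}s"
```

Hardware-In-the-Loop testing follows the same replay idea, but with the model running on the target ECU and real sensor interfaces in the loop.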
In addition, other AI models are often used in this process, such as ML for automatic test case generation and intelligent test data generation. Algorithms are also used for anomaly detection and error isolation, especially on data from various types of sensors. Then everything is tested in a test vehicle where data is collected and the driver has complete control. In all this, the key is to divide the testing process into critical function testing, regression testing, and cyclic verification. If we are talking about testing language models such as ChatGPT, the testing effort is smaller; much of the input for testing comes from end users during normal use.
And how do you ensure the quality of the work done by AI?
BD: By constantly comparing the results obtained with independent sensors and other alternative sources of “ground truth.” In addition, we check the consistency of outputs within the system itself, across successive software versions, and in the case of multiple simulations on the same data, we even expect them to be deterministic. Good systems plan in advance for multiple redundant, technologically independent sources of information, requiring mutual consistency of results for further operation.
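The redundancy requirement above, mutually consistent results from technologically independent sources, can be sketched as a simple plausibility gate. The sensor pairing, values, and tolerance below are illustrative assumptions, not a real fusion algorithm:

```python
# Sketch of redundant-source consistency: two independent sensors (say,
# camera and radar) must agree within a tolerance before the estimate
# is trusted. Tolerance and values are made-up example numbers.

def fuse(camera_m, radar_m, tolerance_m=2.0):
    """Return a fused distance if the sources agree, else None."""
    if abs(camera_m - radar_m) <= tolerance_m:
        return (camera_m + radar_m) / 2.0
    return None  # inconsistent: do not act on this estimate

assert fuse(40.0, 41.0) == 40.5   # consistent: fused value
assert fuse(40.0, 50.0) is None   # inconsistent: degrade gracefully
```

The design point is that on disagreement the system refuses to act on the estimate rather than picking one source, forcing a fallback to a safe behavior.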
In each iteration, we continuously refine our testing and analysis methodology to detect and address all potential problems in advance. I have been involved with ADAS systems verification for many years. While it can be demanding — often forcing one to question project requirements, especially under budget and time pressure — it also gives me an incredible sense of calling. After all, this work has a tangible impact on the lives and safety of all people, both those inside and outside the car. If such a prospect and responsibility does not motivate you to perform at the peak of your skills, it is hard to imagine what would.
MC: First, by constantly testing AI solutions with real as well as simulated data. The key here is to have cases with edge data: extreme values, either very small or very large, generally out of range. The verifier must have a so-called “critical eye”, an inquisitive personality, and always ask the question, “What would happen if…?”.
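The "edge data" idea above is classic boundary-value analysis: test just inside, exactly at, and just outside each limit of a valid range. The speed-signal range and epsilon below are made-up example values:

```python
# Boundary-value sketch for "edge data": probe around both ends of a
# valid range. The [0, 250] km/h range and eps are assumed examples.

def boundary_cases(lo, hi, eps=0.001):
    """Classic boundary values around a [lo, hi] valid range."""
    return [lo - eps, lo, lo + eps, hi - eps, hi, hi + eps]

def in_range(value, lo=0.0, hi=250.0):
    """Hypothetical validity check for a speed signal in km/h."""
    return lo <= value <= hi

cases = boundary_cases(0.0, 250.0)
results = [in_range(v) for v in cases]
# Only the two just-outside values should be rejected.
assert results == [False, True, True, True, True, False]
```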
We certainly know by now that AI solutions are not the answer to our ignorance but rather a time-saving tool to assist engineers.
For example, the use of ChatGPT requires verification of the information generated by the AI and, thus, of the model’s knowledge of the queried issue. It is exactly the same when AI generates graphics. If a generated image of a person has three hands and this was not the intended result, then we know the picture was generated with an error, because humans have only two hands. The same is true of any AI model; we need to know exactly what we want to get and validate the model in this regard.

In your opinion, will AI accompany software development companies from now on?
BD: In our industry, it is now practically impossible for a new car to be authorized for use in the European Union without ADAS systems. The number and quality of the mandatory functions of these systems are constantly increasing. With tight emission controls and the similar performance of most electric motors, software quality and autonomous functions are becoming key competitive differentiators of perceived value beyond the raw price of the vehicle itself.
I dare say that AI will soon become an integral part of our lives — both professional and private.
We can stay in denial and joke as new tools take their first, often awkward steps in the next technological revolution. But a quarter of a century ago, many also doubted that the Internet and Google would become so indispensable to most of us. Case in point: thanks to the increasing sophistication of virtual bots, we are constantly bombarded with unsolicited and difficult-to-block contacts, offers, and spam via phone, text message, or email. In this bot/AI arms race, each of us will soon require a private AI assistant, a virtual secretary, to shield us and effectively filter incoming data. It will decide which information should reach us and what should be politely but firmly rejected or ignored altogether.
MC: I believe that AI will be an inseparable “friend” to us during software development. Previously, we often worked by searching, for example, Stack Overflow forums, and now we can save time thanks to AI. Of course, I believe that everyone must always look critically at the solutions AI gives them. In short, without knowing and understanding what is going on in the code being developed, using AI can be disastrous. As for our lives, on the other hand, it is already happening. Various AI models are being used ever more frequently and boldly in every field, the automotive industry included. It's only a matter of time before cars drive themselves, even fly, and we are accompanied daily by solutions such as assistants, whether in virtual or robotic form.
As the automotive industry rapidly evolves with AI advancements, partnering with experts is crucial to stay ahead. Codelab specialises in integrating cutting-edge technology tailored to your automotive needs. Connect with us to explore how our expertise can drive innovation and efficiency in your projects.
Translated version of the original interview in Polish conducted by Just-join-it.