How are automotive software systems composed?

I got a question asking whether there is a readily available source that explains how automotive software systems are composed and how they contrast with more “normal” software systems.
Unfortunately I don’t know of any textbook or white paper that describes the key principles of automotive software design and execution. The few books I know of are more process oriented. I enclose some references below, but I still don’t think they will provide enough inside knowledge. Some of them may seem old, but I would say there is a huge difference between a current Mercedes S-class and a cheap vehicle anyway (the latter is about two generations behind…)
AUTOSAR is of course a fairly common standard for a runtime environment for automotive software components.
Understanding the design of a software system from its source code might be difficult in the automotive domain. Typically 40% (commercial vehicles) to 80% of the software is developed by Tier 1 suppliers and delivered as black-box binaries. Therefore there is usually no stakeholder with complete access to all the software in a vehicle, not even the OEM.
  • K. Melin, ‘Volvo S80: Electrical system of the future’, Volvo Technology Report, vol. 1, pp. 3–7, 1998.
  • A. Pretschner, M. Broy, I. H. Krüger, and T. Stauner, ‘Software Engineering for Automotive Systems: A Roadmap’, in Future of Software Engineering, 2007, pp. 55–71.
  • J. Mössinger, ‘Software in Automotive Systems’, IEEE Software, vol. 27, no. 2, pp. 92–94, 2010.
  • Tsakiris, ‘Managing Software Interfaces of On-Board Automotive Controllers’, IEEE Software, vol. 28, no. 1, pp. 73–76, 2011.
  • J. Schäuffele and T. Zurawka, Automotive Software Engineering: Grundlagen, Prozesse, Methoden und Werkzeuge effizient einsetzen [Automotive Software Engineering: Principles, Processes, Methods and Tools], 5th ed. Springer, 2012. (Mostly process oriented, and I don’t know if German works for you. Previous editions are available in English, though.)
  • A. Thums and J. Quante, ‘Reengineering embedded automotive software’, in Proceedings of the IEEE International Conference on Software Maintenance, 2012, pp. 493–502.
  • J. Quante, M. Tarabain, and J. Siegmund, ‘Towards recovering and exploiting domain knowledge from C code: A case study on automotive software’, in IEEE Conference on Software Maintenance, Reengineering and Reverse Engineering, 2014, pp. 383–386.
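To make the composition model a little more concrete, here is a minimal sketch of the AUTOSAR-style idea that software components never call each other directly but communicate through ports routed by a generated runtime environment (RTE). All names here (Rte, WheelSpeed, the component functions) are invented for illustration, not actual AUTOSAR APIs, and the real RTE is generated C code, not a Python class.

```python
# Illustrative model of AUTOSAR-style composition: software components
# read and write named ports, and the RTE routes the data between them.
# The integrator sees only the ports of a supplier component, not its body.

class Rte:
    """Stand-in for the generated Runtime Environment: routes port signals."""
    def __init__(self):
        self._signals = {}

    def write(self, port, value):
        self._signals[port] = value

    def read(self, port, default=0):
        return self._signals.get(port, default)

def wheel_speed_sensor_swc(rte):
    # A Tier 1 supplier component; in practice delivered as a binary,
    # so only its port interface is visible to the OEM integrator.
    rte.write("WheelSpeed", 42)  # km/h, hypothetical signal

def display_swc(rte):
    # An OEM component consuming the same port, knowing only its name.
    return f"Speed: {rte.read('WheelSpeed')} km/h"

def run_schedule(rte, runnables):
    # The RTE/OS invokes component "runnables" on a schedule; one pass here.
    for runnable in runnables:
        runnable(rte)

rte = Rte()
run_schedule(rte, [wheel_speed_sensor_swc])
print(display_swc(rte))  # -> Speed: 42 km/h
```

The point of the sketch is the integration story above: because components are coupled only through the RTE's port names, an OEM can compose black-box binaries from several suppliers without any single stakeholder seeing all the source.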

IoT standards

I attended the Embedded Conference Syd last week, which had both exhibitors and presentation tracks covering the Internet of Things. One thing that struck me is the lack of dominating standards in the area. It seems most proposed solutions and available devices are either based on proprietary solutions, or rely on one of many competing standards controlling how to integrate devices and services.

I assume that over time some standards will emerge and come to dominate IoT as well, in the same way as HTTP/HTML did for Internet hypertext (does anybody remember Gopher and WAIS?). But we are not there yet…

Fortunately, the charging one has been solved now that we've all standardized on mini-USB. Or is it micro-USB? Shit.

Comic from XKCD.


When it comes to documentation, I am a firm believer in this quote from Antoine de Saint-Exupéry:

“A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away.”

As I see it there are a few main purposes of producing documentation:

To convey understanding – This shows my liberal view of documentation, since I consider drawings on a whiteboard or a napkin to be documentation. I also consider this one of the most important purposes of documenting things. From a designer’s viewpoint these “documents” are vital in going from an inner vision to an operational image that can be shared with others. Common examples of documentation conveying understanding in large organisations are presentations given at various meetings. These documents can be saved for posterity, but their ability to convey understanding is much smaller for those who weren’t there. On the other hand, written documents have throughout history been one of the most efficient means of building on the knowledge of others whom you never had the opportunity to meet.

To formalise agreements – If there is a business agreement between two organisations, it is almost inevitable that there will be a more formal document detailing the technical content of the agreement: the specification. Many organisations, including my own, also use documents to formalise agreements between internal teams. Your mileage may vary as to how efficient this is in different organisations.

To preserve information – The human memory is not infallible. Documentation is a great support for minimising the decay that is inevitable when relying only on the human mind. Software is also about maintenance, and anybody who has been given responsibility for updating undocumented legacy code knows how difficult that is. (This scenario includes the purpose of understanding as well.)

I don’t buy “the code is the documentation” at all. The code by itself does not fulfil any of the purposes above.

Who is the customer?

Very often I hear that one should have a customer perspective when developing software. Some agile methods propose working closely with the customer. But who is the customer? Is it the person paying for the development?

The end customer is usually not difficult to identify for consumer products, something I have some experience with from working in the car business. But even for such a closely related product as heavy trucks, it is not obvious who the customer is. Is it the user of the truck (e.g. the driver) or the company buying it? These two stakeholders can have quite different wishes and expectations for the end product, and in the worst case these contradict each other.

Looking inside the developing organisation, it gets even less clear. The main work I do as a consultant is with organisations that develop architectures. Who is the customer of an architecture? Is it the end customer? I think he could not care less whether the car has an architecture, as long as it has the features and properties he wants. The customers of an architecture are usually developers (including testers) and internal buyers (usually called product managers).

Sven Grahn, former scientific director of the Swedish Space Corporation, formulated the best “test” for identifying the customer I have heard so far (freely translated from Swedish, and perhaps so distorted by memory that Sven would not recognise it):

The customer is the person, or group, that, when removed, makes the activity (or product) meaningless.

Is there any use for the V-model?

This blog post originated with some thoughts from when I attended the 13th International Conference on Agile Software Development in Malmö, Sweden. As usual when I listen to presentations, they tend to trigger new lines of thought instead of pondering on the details of what was said. I guess this is a flaw in how I internalise knowledge from others.

Not surprisingly, nobody here mentioned the V-model, since this is a conference for those already embracing the gospel of agile development. Too bad, since I think the V-model is one of the most important metaphors regarding system (and software) development there is. But it is also one of the metaphors that has been used in ways that are completely inappropriate.

My “condensed understanding” of the strengths of the V-model is this: it defines a set of activities on the left side of the V, with associated artefacts, aimed at exploring the problem and detailing the solution down to the actual product (the code!). The important part is that each of these design activities has a corresponding verification & validation activity on the right-hand side of the V, in a one-to-one relationship between design and V&V. The number of levels varies, but about 3–5, including the code, is reasonable.
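One way to picture that one-to-one relationship is as paired levels. The level names below are just one common four-level choice for illustration; the model itself only fixes the pairing, not the names or the count:

```python
# A sketch of the V as paired levels: each design activity on the left
# side of the V has exactly one corresponding V&V activity on the right.

V_MODEL = [
    # (left side: design)           (right side: V&V)
    ("requirements analysis",       "acceptance testing"),
    ("system design",               "system testing"),
    ("component design",            "integration testing"),
    ("implementation (the code)",   "unit testing"),
]

def vv_activity_for(design_activity):
    """Look up the V&V counterpart of a given design activity."""
    for left, right in V_MODEL:
        if left == design_activity:
            return right
    raise KeyError(design_activity)

print(vv_activity_for("system design"))  # -> system testing
```

The code, at the point of the V, is where the two sides meet, which is why I count it as one of the levels.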

Where it goes wrong is when the V is laid out in time as a template for the progress of a project. This may make sense if you are designing manufacturing-heavy products, but it is useless for software: the time between understanding the problem (top left of the V) and verifying that your implemented understanding is correct (top right) is just way too long to be competitive. There is also the drawback that when project schedules are tight, the V&V efforts on the right-hand side of the V tend to get squeezed. This is definitely not news to software developers, and it is a key point in agile software development (though probably phrased differently).

It seems to me that, in order to optimise efficiency and speed, there is instead a tendency to downplay the activities above the point of the V (the working code). Big up-front design is seen as such a bad thing that projects end up with no design at all: a direct jump from formulating the problem (e.g. user stories) to coding. I believe that in an agile project all activities in the V must be executed in each sprint for it to claim to be “truly agile”. Each sprint means a better understanding of the problem, a better design to solve it, and a better implementation in the code. Of course, all of these are also verified and validated to the required quality or acceptable level of risk. That is not to say that these activities are equally large, or that the size of the V is the same in all sprints, but if you leave the conscious architecture out of the sprints you don’t learn to do better in the next ones.

You should design the sprint activities in the V to provide the maximum added value with respect to the effort spent (which obviously depends on thousands of context-specific factors), but saying they are irrelevant is only reasonable for the simplest of systems, where a definition of the problem and the code are all you need.

Personally, I rather define the results of the activities, i.e. the artefacts, and let the teams decide for themselves how to populate them. I know blog readers may cry out about excessive documentation, but what I am talking about is the necessity of externalising the information that needs to be shared between developers and teams, and of preserving that information over time, something which code can never do.