Three partners from the Universitat Politècnica de València discuss their particular roles in the DECODER H2020 project. We speak with Tanja Ernestina Vos, Borja Davó Gelardo and Nacho Mansanet Benavent about the project's goals and progress as the consortium reaches the completion of its efforts.
The DECODER project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement number 824231.
This is a Technikon podcast.
The field of software development is trying to keep up with the demand for new code, and if there was ever a time to streamline the process, it's now. From critical infrastructure systems to your smartwatch, software makes everything work. With embedded software everywhere, it has become a necessity to implement new methods for engineers to produce good code at a rapid pace. How is this done? Well, let's find out. I'm Peter Balint from Technikon, and today we take one last look at the DECODER project. DECODER has built an integrated development environment as an open source solution. It will improve the efficiency of development, testing, verification and validation, providing a long overdue answer to the need for an efficient software development environment which can grow with our needs. The Polytechnic University of Valencia is a partner in DECODER, and today we speak with three of their contributors. We start with Associate Professor Tanja Vos, who gives an overview of the project. Then we speak with Nacho Mansanet, a computer engineer. And finally, we will hear from Borja Davó Gelardo, a researcher and computer engineer. Let's have a listen. Welcome to our podcast, Tanja.
Yes, very nice to be here.
So we heard a bit about DECODER in the intro, but in your own words, tell us what the DECODER project is all about.
Well, the main goal of the DECODER project is to make effective tools that will give software developers enough knowledge to support them during the writing of their software. One of the problems that we see is that software engineers do not have enough information to make smart decisions, so they can waste a lot of time. They can make wrong decisions, and this can all be fixed if they have the right information. So the goal of DECODER is to make tools where all this information is there for them, such that they have the project intelligence they need at hand to make smart decisions.
Software has been under development for decades; what is the sudden need for a change in the way it's done?
Well, I don't think it's something that just popped out of the blue right now. I think it is because software projects are getting bigger and bigger, and the teams working on them are also getting bigger and more distributed. People anywhere in the world are working on the same software system. So the problems you get from lacking the right information to make good decisions are getting worse as software complexity increases. This is not something that all of a sudden became a problem. It has been a problem for a long time, and it's getting bigger because of the complexity of software.
This makes sense. Thanks for clarifying. Tanya, what do you think the impact of DECODER will be and I mean, specifically in the software development community of the future?
Well, the impact will be that software engineers can be more productive, and they will waste less time solving problems caused by wrong decisions based on the wrong information. We will give them intelligence on the project, on what other people are doing and on the software they're developing. We will give this information to them through the tools that we develop, so they can make better decisions faster, and then their productivity will increase significantly.
Some people may say I really don't use software, so this doesn't affect me, but this may not necessarily be true. I mean, embedded software is everywhere, especially with the Internet of Things. Can you expand on that notion?
Yes, that's correct. Like you said, software is everywhere. What is that famous phrase, that software is eating the world, or that it is the skin of our society? It is nowadays everywhere. There's a lot more software being developed; it's getting connected, it's getting integrated, systems are getting bigger, and there are all kinds of systems. What you also see is that many companies are turning into software companies. You were talking about embedded software: if you look at automotive companies, you see they're not really car companies anymore, but actually software-developing companies, because many of the components we have in our cars are controlled by software.
Yeah, agreed. Well, thank you so much for your broad overview of DECODER.
Oh, you're more than welcome.
Now we move on to our next guest, Nacho, and he will tell us about the architecture in DECODER. Welcome, Nacho. Thank you for coming in today.
You're welcome.
You may be the best person in this project to tell us how things are constructed. So what can you say about that aspect of DECODER?
Yeah, thank you, Peter. For DECODER, we have designed a software architecture based on superimposed layers. All the DECODER software is built around our knowledge base, which we have called the Persistent Knowledge Monitor, or PKM. The PKM is implemented using a non-relational database, in this case MongoDB, since it gives us the necessary capabilities to store the artifacts that participate in the entire software development lifecycle. In this knowledge base we store all the artifacts, from source code to models, documents or formal requirements specifications, which will later be used by the different tools in the DECODER toolset. These artifacts are stored using the JSON language, and for their validation we have implemented some domain-specific languages, specified using the JSON Schema notation, which we use to validate the correctness and completeness of the content that we store in the PKM. The PKM layer sits at the bottom of the pile, and the rest of the architecture is built upon it. Explaining a little the layers above the PKM layer: first we find the data access layer. To enable access to the PKM, we have implemented an API based on REST services that allows the typical operations, that is, create, read, update and delete, on all the knowledge we store in the PKM. Furthermore, during the create and update operations this layer validates the structure and the content of the artifacts stored in the PKM, using the JSON Schemas I mentioned before and validation based on the domain-specific languages. Above the API layer we find the tools. These tools are what we call the toolset, or the toolchain, of DECODER. We have integrated all kinds of tools to perform the different tasks, from specification and development to validation and testing of the software.
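The validation step Nacho describes, schema checks on artifacts before they enter the PKM, can be sketched in a few lines. This is a minimal illustration of the idea, not the DECODER implementation: the artifact fields and the schema below are hypothetical, and a real system would use full JSON Schema validation rather than this hand-rolled check.

```python
# Minimal sketch of schema validation for a PKM-style artifact.
# Field names and schema rules are hypothetical, for illustration only.

SOURCE_CODE_SCHEMA = {
    "type": {"type": str, "required": True},
    "path": {"type": str, "required": True},
    "language": {"type": str, "required": True},
    "content": {"type": str, "required": False},
}

def validate_artifact(artifact: dict, schema: dict) -> list:
    """Return a list of validation errors (empty if the artifact is valid)."""
    errors = []
    for field, rules in schema.items():
        if field not in artifact:
            if rules["required"]:
                errors.append(f"missing required field: {field}")
        elif not isinstance(artifact[field], rules["type"]):
            errors.append(f"wrong type for field: {field}")
    return errors

artifact = {"type": "source_code", "path": "src/main.c", "language": "C"}
print(validate_artifact(artifact, SOURCE_CODE_SCHEMA))  # → []
```

Rejecting malformed artifacts at write time, as Nacho notes, is what keeps the shared knowledge base consistent for every tool that later reads from it.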
On top of the tools, what we have done is a tool access layer based on REST services, and we have specified these REST services using the OpenAPI notation. This is important because one of our premises is the extensibility of the DECODER platform, and these OpenAPI specifications are used to build, in an automatic way, the graphical user interfaces that invoke the tools from the frontend. In order to attain this extensibility, we have put, between the frontend layer and the tools layer, another intermediate layer that we call the process engine layer, which acts as an intermediary between the frontend and the tools and implements the methodology for the software development. This is the architecture in a very summarized way, but I'd like to note that, as we have used a services or microservices approach, we have been able to deploy the whole architecture using Docker technology. The PKM, the PKM API, the tools, the process engine: almost all the pieces that make up the DECODER platform are deployed in their own containers. This is very important because we had a very basic requirement, which was the availability of the solution. Using this approach, what we ensure is that if any of the containers, tools or processes fails at any moment, the rest of the platform will keep working properly while waiting for it to self-heal. And this is, in a very summarized way, the architecture we have implemented for DECODER.
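The availability property Nacho highlights, that one failing service must not bring down the rest of the platform, can be illustrated with a small dispatcher sketch. The tool names and functions below are hypothetical stand-ins; in DECODER itself the isolation comes from running each service in its own Docker container, not from a try/except in one process.

```python
# Sketch of fault isolation in a process-engine-style dispatcher:
# a failure in one tool must not prevent the other tools from running.

def run_pipeline(tools: dict, artifact: str) -> dict:
    """Run each tool on the artifact; record failures instead of propagating them."""
    results = {}
    for name, tool in tools.items():
        try:
            results[name] = ("ok", tool(artifact))
        except Exception as exc:  # a real engine would restart/self-heal the container
            results[name] = ("failed", str(exc))
    return results

# Hypothetical tools: one of them crashes on every input.
tools = {
    "linter": lambda src: f"linted {src}",
    "broken_tool": lambda src: 1 / 0,  # simulated crash
    "test_runner": lambda src: f"tested {src}",
}

for name, (status, detail) in run_pipeline(tools, "main.c").items():
    print(name, status)
```

The container-per-service deployment gives the same guarantee at the infrastructure level: the crash of `broken_tool` is contained, and the linter and test runner still complete.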
Well, let's fast forward a little bit and look at the benefits once DECODER is in use by developers. What do you see?
Well, using the platform proposed in DECODER, what the user finds is, in one place, everything they need to perform a software development process: from the initial stages of capturing and specifying the requirements to the final stages of software development, that is, the validation and testing of the project, all using free software tools, almost all of them integrated in a web platform.
Well, it sounds like DECODER will certainly increase productivity. No question about that. And thanks so much for your explanation.
You're welcome.
Next up is Borja. Thank you for coming in today.
Thank you, Peter.
You are integrating TESTAR into the DECODER platform. TESTAR is an open source software testing tool that has been developed over the last few years. What challenges did you have with this integration?
Yeah. So TESTAR is a scriptless testing tool that basically works at the graphical user interface level, and we need to understand why this is so. Graphical user interfaces are found in most modern applications, and testing these interfaces at this level basically means testing from the user's perspective. If we chose to do manual testing, it could be really expensive and laborious. There have been attempts to automate graphical user interface testing using scripts, but these usually fail because of the high maintenance costs of those scripts: when you have scripts, you have to maintain them, you have to keep working on them. TESTAR, in contrast, is a scriptless approach. To point out the main features of TESTAR: it does random testing, which means out-of-the-box robustness tests, clicking everywhere. It is a scriptless tool, which avoids a lot of maintenance costs. And it uses implicit oracles, which are used to find bugs related to the non-functional requirements of the application.
Can you briefly say how TESTAR works?
Well, first of all, it detects all the available widgets in a state, and after that TESTAR derives all the possible actions associated with those widgets. Then TESTAR selects an action and executes it. After that, TESTAR waits for the graphical user interface to update and checks whether the new state, for example, contains a suspicious title, or whether the system has crashed or frozen. Taking this into account, there are several challenges for TESTAR, because graphical user interfaces are usually large and complex. One of them is the detection of the available widgets in a state, that is, detecting all the items that are considered widgets in an application. Also, once the widgets are detected: are they in the correct placement, and do they have the correct size? That is another challenge. The derivation of the correct actions corresponding to each of these widgets is another challenge, because it may depend on the application: there can be applications where the detection of widgets or actions works differently. And another challenge is to establish the correct oracles to detect unwanted situations in the tested systems, because, as I said before, no two applications are the same, so it is important to establish different oracles that are specific to each one.
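The loop Borja describes, detect widgets, derive actions, select and execute one, then check the oracles, can be sketched as follows. The simulated GUI and the "suspicious title" oracle here are illustrative assumptions only; the real TESTAR drives actual desktop, web and mobile interfaces through their accessibility APIs.

```python
import random

# Sketch of a TESTAR-style scriptless test loop over a simulated GUI.
# SimulatedGUI, its widgets and the injected defect are hypothetical.

class SimulatedGUI:
    def __init__(self):
        self.title = "Main window"
        self.widgets = ["ok_button", "cancel_button", "help_button"]

    def click(self, widget):
        if widget == "help_button":
            self.title = "Error: page not found"  # injected defect

def derive_actions(gui):
    """Derive one click action per detected widget."""
    return [("click", w) for w in gui.widgets]

def oracle_violated(gui):
    """Implicit oracle: flag suspicious window titles."""
    return any(word in gui.title.lower() for word in ("error", "exception", "crash"))

def run_testar_loop(gui, steps, rng):
    for step in range(steps):
        actions = derive_actions(gui)       # 1. detect widgets, derive actions
        kind, widget = rng.choice(actions)  # 2. select an action at random
        gui.click(widget)                   # 3. execute it
        if oracle_violated(gui):            # 4. check the oracle on the new state
            return f"failure after {step + 1} steps: {gui.title}"
    return "no failures found"

print(run_testar_loop(SimulatedGUI(), steps=20, rng=random.Random(0)))
```

Because action selection is random rather than scripted, there is no test script to maintain when the GUI changes, which is exactly the maintenance advantage Borja points out.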
And how do you know that DECODER has been a successful endeavor? What metrics do you use to say yes, this has been successful?
Well, in the DECODER project, we have had the opportunity to execute TESTAR to test other DECODER applications, and an application that is tested is commonly named the system under test, or SUT. The different use cases can represent challenges for TESTAR: a use case based on mobile applications, for example, can require a different way of acting than desktop applications, and the different configurations that each type of system needs are things to keep in mind when using TESTAR across use cases. Also, to measure how thoroughly TESTAR has explored an application, we often perform coverage analysis, which basically measures the parts of the system under test that are covered and the instructions covered. These are the metrics we use.
And we have to stop there, Borja. Thank you for your insights into DECODER.
Thank you, Peter.
And thanks for listening to our podcast. For more information about DECODER, go to decoder-project.eu. The DECODER project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement number 824231.