Embedded Systems' Automation following OMG's Model Driven Architecture Vision.

2019 
This paper presents an automated process for end-to-end embedded system design following OMG's model-driven architecture (MDA) vision. It tackles a major challenge in automation: bridging the large semantic gap between the specification and the target code. The presented MDA adaptation proposes a uniform and systematic approach by splitting the translation process into multiple layers and introducing design views that are platform independent and implementation independent. In our adaptation of MDA, we start with a formalized specification and end with code (view) generation. The code is then compiled (software) or synthesized (hardware) and finally assembled into the embedded system design. We split the translation process into Model-of-Thing (MoT), Model-of-Design (MoD), and Model-of-View (MoV) layers. MoTs represent the formalized specification, MoDs contain the implementation architecture in a view-independent way, and MoVs are implementation and view dependent, i.e., they capture the specific details of the target language. A MoT is translated to a MoD, the MoD is translated to a MoV, and the MoV is finally used to generate views. The translation between the models is based on templates that reflect design and coding blueprints. The final step, view generation, is itself generated: the MoV model and the unparse method are derived from a view language description. The approach has been successfully applied to generate digital hardware (RTL), properties for verification (SVA), and snippets of firmware that have been successfully synthesized to an FPGA.
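
The abstract does not include code, but the layered MoT → MoD → MoV → view pipeline it describes can be illustrated with a minimal, hypothetical Python sketch. The class names (ModelOfThing, ModelOfDesign, ModelOfView), the register example, and the template-based unparse step below are illustrative assumptions, not the authors' tooling.

```python
# Hypothetical sketch of the layered MoT -> MoD -> MoV -> view pipeline.
# All names and structures are illustrative assumptions.
from dataclasses import dataclass, field
from string import Template


@dataclass
class ModelOfThing:
    """MoT: formalized specification, e.g. a simple register description."""
    name: str
    width: int


@dataclass
class ModelOfDesign:
    """MoD: implementation architecture, independent of any target view."""
    module: str
    signals: list = field(default_factory=list)  # (name, width) pairs


@dataclass
class ModelOfView:
    """MoV: view- and implementation-dependent details for one target language."""
    template: Template
    bindings: dict = field(default_factory=dict)


def mot_to_mod(mot: ModelOfThing) -> ModelOfDesign:
    """Template-driven translation: specification -> view-independent design."""
    return ModelOfDesign(module=f"{mot.name}_reg",
                         signals=[("q", mot.width), ("d", mot.width)])


def mod_to_mov(mod: ModelOfDesign) -> ModelOfView:
    """Bind the design to an RTL (Verilog-like) view description."""
    ports = ",\n  ".join(
        (f"output [{w-1}:0] {n}" if n == "q" else f"input  [{w-1}:0] {n}")
        for n, w in mod.signals)
    rtl = Template("module $module (\n  input clk,\n  $ports\n);\n"
                   "  always @(posedge clk) q <= d;\nendmodule\n")
    return ModelOfView(template=rtl, bindings={"module": mod.module, "ports": ports})


def unparse(mov: ModelOfView) -> str:
    """Final view generation: emit target-language text from the MoV."""
    return mov.template.substitute(mov.bindings)


if __name__ == "__main__":
    spec = ModelOfThing(name="ctrl", width=8)       # MoT (formalized specification)
    print(unparse(mod_to_mov(mot_to_mod(spec))))    # MoD -> MoV -> generated RTL view
```

In the paper's approach the last stage (the MoV model and its unparse method) is itself generated from a view language description; in this sketch it is written by hand only to keep the example self-contained.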