
Application Profiles: What are they and how to model and reuse them properly? A look through the DCAT-AP example.


DISCLAIMER: The proposed approach for guidelines on Application Profiles does not impact DCAT-AP 3.0.0

Introduction

The recently published SEMIC Style Guide for Semantic Engineers is designed to offer clear and practical guidelines and recommendations on how to model Core Vocabularies and Application Profiles, aiming to foster semantic interoperability among the EU Member States. Since the release of the Style Guide, numerous stakeholders, including SEMIC working group members and representatives from various member states, have initiated a review of their existing data specifications to assess their alignment with the Style Guide, and to reformulate them, if necessary, in accordance with the provided guidelines and recommendations. Throughout these reviews, and in the course of developing new national data specifications as well as specifications for data spaces, several questions have surfaced that require clarification and further discussion. The majority of these questions were related to the modelling of Application Profiles (APs).

In this blog article we will describe some of the problems encountered in the modelling of APs, reference the relevant parts of the Style Guide, and propose some alternative modelling approaches. One of these approaches, which will be suggested for consideration, can elegantly tackle some of the more difficult and subtle issues related to the proper expression of the semantics in the modelling of an AP, while also allowing the generation of a consistent set of data specification artefacts to be published, and the “natural” reuse of established vocabularies in the context of APs.

In this article, DCAT-AP and some of its extensions (such as DCAT-AP-HVD and mobilityDCAT-AP) will be utilised to illustrate the problems and proposed solutions. DCAT-AP was selected due to its prominence, being one of the most well known SEMIC specifications. Nevertheless, our observations can easily be applied to any other AP.


Background

To provide some context for the discussion, we will first quote some definitions from the SEMIC Style Guide for Semantic Engineers describing the main data specification types that the Style Guide refers to.

Core Vocabulary (CV) is a basic, reusable and extensible data specification that captures the fundamental characteristics of an entity in a context-neutral fashion. Its main objective is to provide terms to be reused in the broadest possible context.

In the SEMIC context, a Core Vocabulary encompasses a lightweight ontology, and, optionally, a (permissive) data shape specification, and it is expressed in a condensed, comprehensive data specification document. For more details see: 'SEMIC Style Guide: What is a CV Specification?'

An Application Profile (AP) is a data specification aimed to facilitate the data exchange in a well-defined application context. It re-uses concepts from one or more semantic data specifications, while adding more specificity, by identifying mandatory, recommended, and optional elements, addressing particular application needs, and providing recommendations for controlled vocabularies to be used. 

In the SEMIC context, Application Profiles encompass an ontology, which is largely composed of importing the reused ontologies, complemented with an appropriate data shape specification. 


For more details see: 'SEMIC Style Guide: What is an AP Specification?'

The SEMIC Style Guide, in addition to numerous individual (rule-like) recommendations, also provides some architectural clarifications. These clarifications identify the potential consumers of the data specifications, articulate the need for a conceptual model, and delve into the various data specification artefacts that constitute a semantic data specification, elucidating their interrelations. One of the most important focus areas of the Style Guide is the reuse of terms that come from other (established) vocabularies and application profiles. As “Reuse” is a central topic of this article, the reader can find all necessary details related to this in the section dedicated to the current Style Guide recommendations.

Throughout this article, we have consistently relied on DCAT-AP, and some of its extensions tailored to address specific use cases, to illustrate and support our statements. Therefore, in the next section we will conduct a brief exploration of the DCAT-AP “ecosystem”.

DCAT is a W3C recommendation. According to the DCAT specification:

“DCAT is an RDF vocabulary designed to facilitate interoperability between data catalogs published on the Web. [...] DCAT enables a publisher to describe datasets and data services in a catalog using a standard model and vocabulary that facilitates the consumption and aggregation of metadata from multiple catalogs. This can increase the discoverability of datasets and data services.”

As stated in the DCAT-AP specification:

“DCAT-AP is a DCAT profile for sharing information about Catalogues containing Datasets and Data Services descriptions in Europe, under maintenance by the SEMIC action, Interoperable Europe. This Application Profile provides a minimal common basis within Europe to share Dataset and Data Services cross-border and cross-domain.”

Both DCAT and DCAT-AP are undergoing a major revision. The last official DCAT-AP release was version 2.1.1, which is an application profile on DCAT version 2 (DCAT 2) from February 2020. However, in March 2023 the DCAT working group published the Working Draft of DCAT version 3 (DCAT 3), and the SEMIC community promptly aligned by updating DCAT-AP to version 3 (DCAT-AP 3.0). The latest editor’s draft of DCAT-AP 3.0 was published in July 2023, and can be found here.

Unless explicitly stated otherwise, the examples referencing DCAT or DCAT-AP will pertain to version 3 of those specifications. In cases where we want to refer to previous versions of DCAT or DCAT-AP, we will explicitly mention their version number (e.g. DCAT 2 or DCAT-AP 2.1.1).

Since DCAT-AP was designed as an application profile that can be used in the broadest possible context for “sharing information about Catalogues containing Datasets and Data Services descriptions” across all European countries, it provides what was considered to be a minimal set of necessary restrictions. Many groups, who were interested in sharing Datasets related to very specific application domains, or within a certain European country, found the need to define application profiles that are more specific than DCAT-AP. As a result, there exist a number of DCAT-AP derivatives/extensions, such as DCAT-AP for High Value Datasets (DCAT-AP HVD), DCAT-AP to describe statistical datasets and statistical data in open data portals (StatDCAT-AP), DCAT-AP for describing geospatial datasets, dataset series and services (GeoDCAT-AP), DCAT-AP for base registries (BRegDCAT-AP), a mobility extension for the DCAT-AP (mobilityDCAT-AP), various national DCAT-APs (e.g. for Belgium, Finland, Germany, Italy, Spain, Norway or Poland) and several others, published or still under development (e.g. napDCAT-AP or HealthDCAT-AP).

There are a number of public resources available that someone interested in extending DCAT-AP could turn to, such as the Recommendations on how to extend DCAT-AP or the various SEMIC webinars held on this topic, like this one.


Identifying the problems

There are multiple problems that professionals have encountered and raised while trying to apply the recommendations in the Style Guide to the modelling of Application Profiles, such as DCAT-AP and its derivatives. The issues were not necessarily due to the inapplicability of the recommendations. Instead, they brought to light some fundamental shortcomings of the existing models, which are not able to clearly encode the semantics of the model elements (classes, their attributes and relations) in different usage contexts, and are not fully appropriate for generating a consistent set of data specification artefacts representing the data specification (i.e. human-readable representation, visual representation, RDF, SHACL shapes, etc.). These problems are even more evident in the case of APs.

By following the discussions on the various GitHub repositories that host the data specifications currently under revision or development, and by engaging with SEMIC stakeholders (in WG meetings, bilateral discussions, etc.), we identified a number of problems, which are summarised below.

1. Vocabularies often come with a Default Application Profile

The specifications of CVs usually include a “default” Application Profile, though in most cases this is not explicitly identified as such.

For example, most cardinality restrictions and certain domain or range specifications in a vocabulary are often in fact restrictions that go beyond the simple definition of a vocabulary term, and belong to the specification of their usage in what the authors envisioned as the most generic context.

Examples: DCAT relies on the DCMI Metadata Terms (DC Terms) vocabulary for many of its key properties. One of these is dcterms:issued, which DCAT presents with a different label, namely “release date”, and with a detailed description of its range values. Both of these changes are restrictions on the usage of this property in the DCAT context, on top of what has been defined by DC Terms.

This phenomenon is not restricted to properties reused from external vocabularies; it also occurs for properties within the DCAT namespace. For instance, dcat:mediaType is expressed in the RDF representation with a formal restriction of the range to dcterms:MediaType and with a usage note that mandates the use of the IANA media types as codelist. Such strong restrictions, like a mandatory codelist, reduce the reuse potential and therefore usually belong to an application profile rather than to a highly reusable vocabulary. However, the restriction may also be intentional, in order to constrain the use of the property within the scope of the DCAT ecosystem. Similarly, the range of the dcat:themeTaxonomy property is rdfs:Resource in the DCAT vocabulary, but there are recommendations for what the proper range should be.

Making such restrictions part of the vocabulary, rather than explicitly part of an application profile for that vocabulary, might interfere with the reuse of that vocabulary by other vocabularies or application profiles, leading to logical inconsistencies.
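
To make the distinction concrete, below is a minimal sketch in Turtle (using a hypothetical profile namespace ex:) of how such a usage restriction could be kept out of the vocabulary and expressed instead as a SHACL constraint belonging to a (default) application profile; it is an illustration of the idea, not an excerpt from any published artefact.

    @prefix rdf:     <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    @prefix rdfs:    <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix sh:      <http://www.w3.org/ns/shacl#> .
    @prefix dcat:    <http://www.w3.org/ns/dcat#> .
    @prefix dcterms: <http://purl.org/dc/terms/> .
    @prefix ex:      <https://example.org/dcat-default-profile#> .

    # Vocabulary level: only the minimal, context-neutral definition of the term.
    dcat:mediaType a rdf:Property ;
        rdfs:label "media type"@en .

    # Profile level: the stronger usage constraints (range restriction and,
    # via further constraints, the mandatory IANA codelist) live in the profile.
    ex:MediaTypeShape a sh:PropertyShape ;
        sh:path dcat:mediaType ;
        sh:class dcterms:MediaType ;
        sh:nodeKind sh:IRI .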

2. Problems related to the generation of data specification artefacts

The generation of published data specification artefacts varies from one data specification to another, and even within different versions of the same data specification.

There are different problems related to the generation of artefacts, such as:

1. There is a varying set of artefacts that is published for different CVs or APs

Example: DC Terms publishes an HTML and an RDF representation, DCAT publishes HTML, RDF and visual (UML-style) representations, while DCAT-AP publishes HTML, RDF, UML and SHACL representations, with additional support for JSON-LD data exchange. This collection of artefacts also evolves over time: in the past, DCAT-AP was published as a PDF document. In the first versions there was no SHACL representation (SHACL was not yet defined), while today there is a collection of shapes, each incorporating more complex or conditional aspects of the usage constraints expressed in the DCAT-AP specification, making them more fit for purpose.

2. The published artefacts of a data specification are not fully coherent among themselves

The main reason is that, more often than not, some of the artefacts are manually edited after their automatic generation by a toolchain (or, as in the case of DCAT, they are fully manually edited).

Examples: 

  • In DCAT-AP 2, where a manual process was applied, differences between the UML diagram, the tabular representation and the SHACL and RDF artefacts occurred frequently, leading to numerous bug reports.
  • In DCAT-AP 3 the internal coherence problem of the formal specification was resolved through conceptual-model-driven artefact generation using a toolchain, which reduced manual adjustment of the HTML documentation to a minimum, e.g. the addition of usage notes, or of the additional tables listing code lists with their level of requirement (mandatory, recommended, optional).
  • The UML diagram in the DCAT specification shows classes and the relations between them. The relations represent OWL object properties. In some cases, the classes at the ends of a relation in the UML diagram become the domain/range of that property in the HTML and RDF representations (e.g. dcat:distribution has both its domain and range specified), but in other cases they do not (dcat:accessService does not have a domain specified); a sketch of the two cases follows below. The same holds for attributes that have a class provided as their type.
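
For illustration, the Turtle fragment below contrasts the two cases; the triples are reproduced here in simplified form as a sketch, so please consult the published DCAT RDF for the authoritative statements.

    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix dcat: <http://www.w3.org/ns/dcat#> .

    # Relation with both ends formalised in the RDF representation.
    dcat:distribution rdfs:domain dcat:Dataset ;
                      rdfs:range  dcat:Distribution .

    # Relation whose source class appears in the UML diagram,
    # but is not formalised as an rdfs:domain in the RDF representation.
    dcat:accessService rdfs:range dcat:DataService .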

These varying sets of artefacts and lack of internal coherence lead to issues such as:

  • Users that need a specific type of artefact (e.g. RDF or SHACL) may find it in one (version of a) data specification, but not in others (e.g. there is an RDF file for DCAT-AP 2.1.1, but not for DCAT-AP 3.0.0, see GH issue #315; earlier versions of SEMIC CVs were generated by different editors using different tools, leading to different sets of published artefacts that were often incoherent among themselves, or even internally in some artefacts, as in the case of statDCAT-AP, see Editor’s Notes about namespace issues)
  • For a given data specification, users looking at one artefact can find different information than by looking at another artefact (e.g. the UML diagram of DCAT-AP 2.1.1 differed from the textual representation in the PDF, see GitHub issue #186 or issue #173)

Possible solutions include:

  • Automatic generation of all artefacts from a “single source of truth” can help
  • The “single source of truth” needs to provide appropriate conceptualisations to capture a) the full semantics of the concepts that are part of the given data specification, as well as b) all aspects (labels, URIs, usage notes) that go into the data specification artefacts
  • Ideally, the “single source of truth” should be the conceptual model. However, not all CVs or APs start the modelling from an explicit conceptual model. For example, it is common practice for most W3C Working Groups, including the one that built DCAT, and also in the development of some SEMIC data specifications, such as GeoDCAT-AP (ref), to focus primarily on the (semi-automatic) creation of a coherent HTML representation, without the (preliminary or simultaneous) development of an explicit conceptual model.

3. Relationships between Application Profiles

Relationships between application profiles (APs) and the vocabularies and/or APs that they extend are often not clearly defined.

An application profile (AP) can be:

  • A “proper” profile on another vocabulary or AP (i.e. allowing only generalisations and further restrictions on existing elements), or
  • Compatible with another vocabulary or AP (i.e. an extension of a vocabulary or AP for a different purpose than the original one, but such that it is not conflicting with the original definitions), or
  • Partially overlapping with another vocabulary or AP, but conflicting

Examples:

  • DCAT-AP is a “proper” profile of DCAT, since their purpose is compatible, and none of the DCAT-AP definitions conflict with the definitions in DCAT. Similarly, DCAT-AP HVD is a “proper” profile on DCAT-AP.
  • The second case likely does not occur in the DCAT ecosystem, but it could happen in other domains, where one AP prescribes restrictions on different parts of the vocabulary or AP it extends than another AP does. This occurs mainly in domains where the base vocabulary has a large number of classes.
  • DCAT-AP and GeoDCAT-AP are overlapping but have conflicting requirements on the range of the dcat:theme property

How do we formally encode the relationships between APs? Would the Profiles Vocabulary be appropriate and sufficient for this purpose?
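
As a first impression of what such an encoding could look like, the Turtle sketch below uses the Profiles Vocabulary (PROF) to state that DCAT-AP profiles DCAT and that DCAT-AP HVD profiles DCAT-AP. The profile URIs and the attached resource descriptor are illustrative assumptions, not the official declarations of those specifications.

    @prefix prof: <http://www.w3.org/ns/dx/prof/> .
    @prefix role: <http://www.w3.org/ns/dx/prof/role/> .
    @prefix dct:  <http://purl.org/dc/terms/> .
    @prefix ex:   <https://example.org/profiles/> .

    ex:dcat-ap a prof:Profile ;
        dct:title "DCAT-AP"@en ;
        prof:isProfileOf <http://www.w3.org/ns/dcat#> ;
        # a resource descriptor pointing to the artefact that implements the profile's constraints
        prof:hasResource [
            a prof:ResourceDescriptor ;
            prof:hasRole role:validation ;
            prof:hasArtifact <https://example.org/profiles/dcat-ap/shapes.ttl>
        ] .

    ex:dcat-ap-hvd a prof:Profile ;
        dct:title "DCAT-AP HVD"@en ;
        prof:isProfileOf ex:dcat-ap .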

4. Incoherence between Application Profiles and Vocabularies

The published artefacts of a data specification might not be fully coherent with the reused vocabularies or application profiles.

How can we ensure that:

  • The RDF of DCAT is compatible with the RDF of DCAT-AP, or 
  • The SHACL of DCAT-AP is coherent with the SHACL of mobilityDCAT-AP

For example, if DCAT-AP states that the mandatory codelist for dcat:theme is the EU Dataset Theme Vocabulary, then it is natural to expect that the SHACL provided by DCAT-AP would check that each value provided for dcat:theme is within that codelist. However, if another profile enforces the use of a different codelist (which is a valid requirement), that profile's SHACL representation would check that all values of dcat:theme are within that other codelist. Therefore, a Dataset adhering to both DCAT-AP and the other profile would fail both SHACL validations, as at least one value of dcat:theme will be outside the expected codelist.

MobilityDCAT-AP avoided such an inconsistency by adding a property, mobilitydcatap:mobilityTheme, as a sub-property of dcat:theme. The new property links to a new controlled vocabulary, valid only for the mobility extension. In addition, the DCAT-AP property dcat:theme remains valid under the extension, and links to the controlled vocabulary prescribed by DCAT-AP. This way, both controlled vocabularies can be used without interference.
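
A minimal sketch of this pattern is shown below, assuming hypothetical URIs for the mobility namespace, the controlled vocabulary and the shape; the actual mobilityDCAT-AP artefacts may differ.

    @prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix sh:   <http://www.w3.org/ns/shacl#> .
    @prefix skos: <http://www.w3.org/2004/02/skos/core#> .
    @prefix dcat: <http://www.w3.org/ns/dcat#> .
    @prefix mob:  <https://example.org/mobilitydcatap#> .

    # The new sub-property refines dcat:theme without redefining it.
    mob:mobilityTheme a rdf:Property ;
        rdfs:subPropertyOf dcat:theme ;
        rdfs:label "mobility theme"@en .

    # The mobility codelist is enforced only on the sub-property,
    # so the DCAT-AP constraint on dcat:theme remains untouched.
    mob:MobilityThemeShape a sh:NodeShape ;
        sh:targetClass dcat:Dataset ;
        sh:property [
            sh:path mob:mobilityTheme ;
            sh:node [
                sh:property [
                    sh:path skos:inScheme ;
                    sh:hasValue <https://example.org/mobility-theme-vocabulary>
                ]
            ]
        ] .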

  • In the case of SHACL validation, the inclusion of background knowledge such as the subclass relationships (expressed in the DCAT RDF) may result in the reporting of many false-positive errors.

For example, a DCAT-AP Catalogue is recommended to have a dcat:theme value. However, when the DCAT RDF background knowledge is included in addition to the SHACL shapes, a value for dcat:theme becomes required. This is unintentional: the reason is that SHACL takes the subclass statements into account, and these subclass statements in DCAT.rdf sit at a different level than the DCAT-AP constraints. One may think about deactivating some inherited data shapes (see 'SHACL: Deactivating a Shape'), but that may lead to other unforeseen problems.
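
For reference, the SHACL feature referred to above is the sh:deactivated flag: adding the single triple below (with a hypothetical shape URI) to the combined shapes graph makes validators skip the inherited shape.

    @prefix sh: <http://www.w3.org/ns/shacl#> .
    @prefix ex: <https://example.org/inherited-shapes#> .

    # A shape with sh:deactivated set to true is ignored during SHACL validation.
    ex:InheritedThemeShape sh:deactivated true .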

  • The resulting final definitions of entities are logically consistent?

For example, DCAT-AP sets the range of the dcat:theme property to the Dataset Theme Vocabulary, while mobilityDCAT-AP, which is supposed to be an extension of DCAT-AP, allows additional values for the dcat:theme property. If a given dataset published according to the mobilityDCAT-AP profile specifies a theme from, say, a transport theme vocabulary instead of the Dataset Theme Vocabulary, will that be a valid Dataset according to DCAT-AP?

What level of consistency does it make sense to check?

Is the requirement fulfilled that a dataset description should be able to conform to multiple APs?

5. Disadvantages of Generalisation

Creating subclasses/subproperties of classes/properties from existing vocabularies and other APs is the safest approach if we want to make semantic adaptations in our own vocabulary/AP (see the related sections in the Style Guide). However, this might have several drawbacks in the context of interoperability and implementation, which can negatively affect adoption.

For more information, see the discussion in the Recommendation in the current Style Guide section below.

6. Lack of recommendations on the definition of Entity Profiles

There are no clear recommendations on how to define an entity profile such that a) it can be easily differentiated from the entity itself, on which the profile applies, and b) multiple entity profiles can be expressed on the same entity (e.g. dcat-ap:Dataset vs dcat-ap:DatasetInSeries). By entity we mean any class or property that is defined in a vocabulary and that is referenced in an AP.

In particular:

  • There is no well-defined and recommended way of expressing the semantics of an entity profile (as opposed to the semantics of an entity), neither in general nor in the Style Guide in particular
  • It is not clear what would be the best solution to express such profiles in the technical documentation

This is a problem, as in the absence of such clear guidelines and recommendations, different groups will follow different methods with different results. As a consequence, there will be no coherent set of data specifications that users of multiple application profiles can easily navigate. The lack of clear recommendations on creating definitions of entity profiles will also prevent the development of a universally applicable methodology for application profile modelling, generation of specification artefacts, and for following established quality assurance steps.


Recommendation in the current Style Guide

The current Style Guide (v.1.0.0) has an entire section with recommendations regarding the reuse of existing vocabularies: 'SEMIC Style Guide: Clarification on reuse'.

That section defines what reuse means in the context of SEMIC, and looks concretely at the reuse of classes and properties in three different scenarios: reusing them (i) as-is, (ii) with terminological adaptations, and (iii) with semantic adaptations.

All three reuse scenarios can be applicable in the case of an Application Profile. While it is important to understand the recommendations for the reuse of entities “as is” and with “terminological adaptations”, it is even more important to look at what the recommendations are for reuse with “semantic adaptation”. In a nutshell, the recommendation on reusing entities with semantic adaptation is to create a specialisation (i.e. a subclass or subproperty) of the reused entity, and the Style Guide provides plenty of detail on how to do this “properly”.

This is a clean, generic solution that can work both in case of CVs and APs, and supports automatic generation of consistent sets of data specification artefacts as part of the data specification. However, in the case of Application Profiles, the creation of new (sub)classes, and recommending that these new subclasses be used to annotate the data, instead of the more generic superclasses that they extend, might result in practical hurdles in adoption. For example, if a dcat:DatasetInSeries is introduced as a subclass of dcat:Dataset in DCAT-AP, and the users of DCAT-AP are expected to annotate their data with dcat:DatasetInSeries instead of dcat:Dataset for datasets that are part of a dataset series, technical “interoperability” with other resources adopting DCAT-AP will require additional effort, such as one or more of the following:

  • The use of a reasoner and/or SHACL shape validator
  • Explicit assertion of multiple rdf:types (both dcat:DatasetInSeries and dcat:Dataset) in the metadata (see the sketch after this list)
  • An extra pre-processing step, either at the time of publication or at the time of filtering/processing, to encode the knowledge that a resource published as dcat:DatasetInSeries in the DCAT-AP context is equivalent to dcat:Dataset
  • Implementation of some logic or rules at the processing time, to be able to recognize both dcat:Dataset and dcat:DatasetInSeries as representing dcat:Dataset instances
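
As a small illustration of the second option above, the instance data would simply carry both types; the dcat-ap: namespace and the DatasetInSeries class are the hypothetical extension used in this example, not existing terms.

    @prefix dcat:    <http://www.w3.org/ns/dcat#> .
    @prefix dcat-ap: <https://example.org/dcat-ap#> .
    @prefix ex:      <https://example.org/data/> .

    # Asserting both types keeps plain DCAT-based consumers working
    # without requiring a reasoner or an extra pre-processing step.
    ex:my-dataset a dcat-ap:DatasetInSeries , dcat:Dataset .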

A new approach for modelling Application Profiles

There are many ways of modelling application profiles. These include various approaches (which can focus on the HTML document editing, RDF or SHACL specifications, UML modelling, specification of LinkML schemas, etc.) and the development and use of specialised tools (e.g. the tietomallit.suomi.fi portal used to develop Data Vocabularies, such as the Finnish DCAT-AP, in Finland, or EntryScape used in Sweden). 

A generic requirement that becomes evident in all the approaches we have seen is that, if we want to generate the data specification artefacts from a given model (as opposed to manually creating them, as in the case of DCAT or GeoDCAT-AP), we should be able to provide a unique identifier (ideally a URI) to the application profile elements. Having a unique identifier for the profile elements, separate from the identifier of the elements that they modify, allows one to make statements about the modifications and additional restrictions that are part of the AP's definition, without changing the meaning of the original elements.

One way of creating separate identifiers for the profile elements is to create new conceptual model entities (e.g. classes and properties), mainly through generalisation, as recommended in the Style Guide. We have briefly covered the advantages and disadvantages of this approach above. Another way would be to introduce a formal way to describe Application Profiles and their parts, including what is described above as Entity Profiles. Each entity profile would have a unique identifier and be expressed in RDF. We are investigating whether such an expression could be based, to a large extent, on a subset of SHACL in combination with PROF. We will return to this topic in a later blog post.
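
To give a flavour of what such an expression might look like, the sketch below assumes that each entity profile is published as a SHACL node shape with its own URI, attached to a PROF profile description. None of the URIs are official, and the use of dct:isPartOf to link the shape to the profile is purely illustrative.

    @prefix sh:   <http://www.w3.org/ns/shacl#> .
    @prefix prof: <http://www.w3.org/ns/dx/prof/> .
    @prefix dcat: <http://www.w3.org/ns/dcat#> .
    @prefix dct:  <http://purl.org/dc/terms/> .
    @prefix ex:   <https://example.org/dcat-ap/> .

    # The application profile itself, identified separately from the vocabulary it profiles.
    ex:DCAT-AP a prof:Profile ;
        prof:isProfileOf <http://www.w3.org/ns/dcat#> .

    # An "entity profile": the DCAT-AP view on dcat:Dataset,
    # with an identifier distinct from dcat:Dataset itself.
    ex:DatasetEntityProfile a sh:NodeShape ;
        sh:targetClass dcat:Dataset ;
        dct:isPartOf ex:DCAT-AP ;
        sh:property [
            sh:path dct:title ;
            sh:minCount 1          # mandatory in this (hypothetical) profile
        ] .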

For the time being, we describe the changes that apply to the reused entities by using features built in the UML language itself. We explored UML as a potential avenue, in order to stay in line as much as possible with the approach in the Style Guide regarding the building of conceptual models. We learned that, in fact, UML provides a clear set of concepts dedicated to modelling Profiles, which could nicely fit our need for modelling the differences between the classes that are reused from a vocabulary, and the additional restrictions that apply to them in the context of an Application Profile.

UML Profiles

Some background on UML Profiles: while Profiles have been part of the UML language since UML v1.1, Profile Diagrams were added in UML version 2.0, and appeared in the “official” taxonomy of UML diagrams in UML 2.2. From the “Profiles history and design requirements” section (18.1.2) of the OMG UML Superstructure document we learn that initially the “Profile mechanism has been defined specifically for providing a lightweight extension mechanism to the UML standard.” However, with the evolution of the standard, the notion of a “Profile” was defined to provide more structure and precision to the definition of Stereotypes and Tagged Values, and starting with UML 2 profiles became a well-defined, specific meta-modelling technique. Looking at the requirements that drove the specification of the profile semantics in UML, we can see that they align very well with the semantics of the APs in SEMIC. For example, we quote below their first requirement, which is one of the most relevant ones for us:

“A profile must provide mechanisms for specializing a reference metamodel (such as a set of UML packages) in such a way that the specialized semantics do not contradict the semantics of the reference metamodel. That is, profile constraints may typically define well-formedness rules that are more constraining (but consistent with) those specified by the reference metamodel.”

The key elements of UML profile modelling are: profile packages, stereotypes, metadata, tagged values, and enumerations. In addition to these, there are also more advanced features that can be used to define constraints, shape scripts and special attributes that define the default behaviour and appearance of the stereotyped elements. However, through experimentation, we came up with a very elegant, clean and minimalistic solution that can be used for modelling APs in a way that addresses many of the concerns above.

Proposed Solution

The key elements of our solution are:

  • A class-application-profile stereotype that can be applied to classes, i.e. instances of the Class metaclass
  • A property-application-profile stereotype that can be applied to relations and attributes, namely to the Generalization, Association, Dependency and Attribute metaclasses
  • The creation of a <<profile>> package per AP that contains (at least) one Profile diagram and several classes marked with the class-application-profile stereotype, which might have relationships between them marked with the property-application-profile stereotype.
  • An application profile class will be represented by a UML element that is distinct from the UML element representing the “target” class, i.e. the class on which the profile is defined. If the “target” class is a regular class (i.e. an owl:Class or rdfs:Class in the RDF representation), the application profile class will be linked to the target class with a Realization type connector. If the “target” class is another application profile class, then it will be linked to the target class with a Generalization type connector.
  • Unless we are defining a “default” application profile of a vocabulary (as is the case in our DCAT example), an application profile class should have a name, or provide another URI-generation mechanism, that yields a URI distinct from that of its “target”. These URIs will be used to generate the SHACL shapes that apply to the target class in the context of this application profile.
  • Properties used in an application profile can be represented as attributes or connectors in the Profile diagram, and, similarly to the application profile classes, they should:
    • have the property-application-profile stereotype
    • extend their target properties, if applicable, through Realization or Generalization type connectors (depending on where the target properties are defined)
    • generate a distinct URI that can be used in the generation of SHACL shapes

For convenience, we provide our solution as a reusable UML Profile, exported as an XML file, and also as a Model Driven Generation (MDG) Technology file that can be integrated into UML modelling tools that support it, such as Enterprise Architect.

To ensure that our solution is usable in practice, and that it covers all the possible needs, we have created a full conceptual model of DCAT 3 in UML, in which we separated the model of the vocabulary per se (as it is manifested in the RDF representation) from its “default application profile”, which specifies which properties should be used, and how, in the context of publishing data catalogues described with DCAT. With this exercise we were able to address what we see as one of the main shortcomings of the current W3C specification, and to generate conceptual models that we believe would allow the automatic generation of all data specification artefacts in a consistent fashion.

Moreover, in order to test that our solution is applicable in the generic use case where Application Profiles are created by extending other APs (as is the case with the many DCAT-AP extensions), we have also (partially) re-created the conceptual model of DCAT-AP with this new approach. To allow the reader a more thorough investigation of this approach, the model is shared via a dedicated GitHub repository.


Discussion and conclusions

In this article we looked at the problem of modelling Application Profiles in general, as well as in the context of DCAT and DCAT-AP and other DCAT-AP extensions in particular. We were interested in identifying the common problems, and looked at possible ideas that would help our stakeholders model APs correctly, consistently and in accordance with the SEMIC Style Guide.

We realise that our analysis of the problems cannot possibly be complete, and that the approaches discussed and presented above are just some of the possible solutions. We are aware that there is a certain level of resistance towards the use of UML in the community. As mentioned above, we are looking into formally describing this way of working not only in UML, but also as a combination of SHACL and PROF. We want to invite members of the community to try to translate the proposed (UML) solution to their own modelling approach. We hope that the community’s feedback can help us improve the creation and exchange of application profiles.

We also realise that encoding all the knowledge that is necessary to generate all the data specification artefacts can make diagrams very complicated. Therefore, complex conceptual models should be split up into multiple simpler diagrams. We should always pay attention not to lose the message that we want to communicate to the business people.

It is worth noting that the modelling solution proposed in this article, although new, is fully compatible with the current SEMIC modelling. SEMIC already follows this modelling approach implicitly. What we propose is to make those implicit assumptions explicit. This will result in an editing process with enhanced clarity, offering better support for automated artefact generation, ultimately ensuring a higher level of correctness and consistency of the generated artefacts. It is also worth noting that the recommendations in this new modelling solution are compatible with, and complementary to, the ones found in the current SEMIC Style Guide, and, depending on the feedback received from the community, we plan to integrate them into the Style Guide.

Therefore, we are looking forward to the community's reactions, thoughts, feedback and alternative proposals regarding this blog article. In particular, we are very interested in hearing what the readers think about this approach, about their experiences with modelling APs, their experiences with similar or alternative approaches to addressing the problems listed above, what it would take to follow the proposed approach, and so on.

Feedback can be provided as comments on this article, as issues on the Style Guide GitHub repository, where we shared our models, tagged with the label Blog-APModelling, or at various SEMIC meetings or webinars organised on related topics, such as the upcoming SEMIC Style Guide Webinar.
