Model Separation Strategy
Act as the ONAP CLAMP Automation Composition Management developer.
Describe how the models are separated into JPA models, REST models, and Kafka models, so that changes to the JPA models do not break backward compatibility with the other models.
Describe why adding new fields to an object of the REST or Kafka models does not break backward compatibility, whereas updating or deleting a field can break it if a consumer expects that field's value.
As an ONAP CLAMP Automation Composition Management (ACM) developer, I can describe the model separation strategy we use to maintain backward compatibility across the persistence (JPA), external API (REST), and asynchronous communication (Kafka) layers.
Model Separation Strategy
We adhere to a strict separation of concerns for our data models to decouple the internal storage format from the external communication protocols. This separation is crucial for evolving the system without breaking existing consumers.
JPA Models (Persistence Layer):
Purpose: These classes are strictly for database interaction and persistence. They map directly to tables in the database via JPA annotations such as @Entity.
Independence: They are isolated from all external interfaces (REST and Kafka). Any changes made here (e.g., changing a column name, altering data types, or adding indexes) do not directly affect the REST API or the Kafka message structure.
Backward Compatibility: Database schema migrations are handled internally (e.g., using Flyway or Liquibase). The JPA model is the most flexible layer, as its changes are contained within the ACM service.
REST Models (API Layer):
Purpose: These are Data Transfer Objects (DTOs) used for synchronous communication with external clients (UI, other microservices). They define the structure of the JSON payload for all HTTP requests and responses.
Independence: They are defined specifically for the REST API and are decoupled from the JPA models using Mapper classes (e.g., MapStruct or manual conversion logic). A change in a JPA field requires only an update to the mapper, not the REST DTO, unless the change is semantically required by the API contract.
Version Control: REST APIs should generally be versioned (e.g., /v1/, /v2/) to manage major, breaking changes.
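To make the decoupling concrete, the mapper layer between a JPA entity and a REST DTO can be sketched as below. This is a minimal illustration, not the actual ACM code: the class and field names are assumptions, and the real project may use MapStruct instead of a hand-written mapper.

```java
import java.util.UUID;

// Hypothetical JPA entity (persistence layer): stores the id as a String,
// matching a VARCHAR(36) column.
class JpaAutomationComposition {
    String instanceId;
    String name;
}

// Hypothetical REST DTO (API layer): exposes the id as a UUID.
class AutomationCompositionDto {
    UUID instanceId;
    String name;
}

// The mapper is the only place that knows about both models, so a
// JPA-side change is absorbed here without touching the DTO contract.
class AutomationCompositionMapper {
    static AutomationCompositionDto toDto(JpaAutomationComposition entity) {
        var dto = new AutomationCompositionDto();
        dto.instanceId = UUID.fromString(entity.instanceId);
        dto.name = entity.name;
        return dto;
    }
}
```

If a column is later renamed or retyped on the JPA side, only the mapper changes; external clients keep receiving the same JSON shape.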
Kafka Models (Asynchronous Communication Layer):
Purpose: These are the message models used for asynchronous, event-driven communication (e.g., messages between ACM and Participants, or between internal ACM components). They often use a defined schema language like Avro or JSON Schema.
Independence: Like REST models, they are decoupled from JPA models and are handled by dedicated message publishers/consumers and mappers.
Schema Registry: For true compatibility, Kafka message models are registered with a Schema Registry. This registry enforces rules for evolution (e.g., allowing additions but preventing incompatible changes) and ensures consumers can process messages from older or newer producers.
| Model Type | Primary Concern | Coupling to JPA | Backward Compatibility Enforced by |
| --- | --- | --- | --- |
| JPA | Database Schema | Direct (via annotations) | Internal schema migration tools |
| REST | HTTP Payload (JSON) | Decoupled (via Mappers) | API Versioning, DTO evolution rules |
| Kafka | Message Payload (Avro/JSON) | Decoupled (via Mappers) | Schema Registry rules (e.g., Avro) |
Backward Compatibility Rules for REST and Kafka Models
The stability of external models (REST DTOs and Kafka message schemas) relies on strictly adhering to specific principles for model evolution.
1. Adding New Fields (Non-Breaking Change)
Adding new, optional fields to an existing REST DTO or Kafka model does NOT break backward compatibility.
Old Producers to New Consumers:
The old producer (e.g., an older version of ACM) does not include the new field in its payload.
The new consumer (e.g., a new Participant) can safely deserialize the object because the new field is treated as optional (e.g., initialized to null in Java, or handled by a default value in Avro).
New Producers to Old Consumers:
The new producer includes the new field.
The old consumer receives the payload but simply ignores the unknown new field during deserialization. The consumer only extracts the fields it knows, and its logic remains unaffected.
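This tolerant-reader behaviour can be sketched with the payload modelled as a plain map standing in for the JSON deserializer. All names here are illustrative assumptions, not actual ACM message fields:

```java
import java.util.Map;

// An "old" consumer that only knows two fields. Unknown keys in the
// payload are simply never read, so a new producer that adds fields
// cannot break this consumer's logic.
class OldConsumer {
    record KnownMessage(String compositionId, String state) {}

    static KnownMessage read(Map<String, Object> payload) {
        // Only the known fields are extracted; anything else is ignored.
        return new KnownMessage(
            (String) payload.get("compositionId"),
            (String) payload.get("state"));
    }
}
```

A payload containing an extra key such as "newField" deserializes exactly as before, which is why field additions are the one evolution that is safe in both directions.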
2. Updating or Deleting Fields (Potential Breaking Change)
Any modification or deletion of an existing field is a potentially breaking change and requires careful management or versioning.
| Type of Change | Compatibility Impact | Why it Breaks |
| --- | --- | --- |
| Deleting a Field | BREAKING | If an existing consumer expects and relies on the value of the deleted field for its core logic (e.g., a required ID, a crucial state indicator), its processing will fail (e.g., with a missing/null value or a deserialization error). |
| Renaming a Field | BREAKING | Renaming is, for all intents and purposes, the deletion of an old field and the addition of a new one. Old consumers will fail to find the field they expect. |
| Changing Data Type | POTENTIALLY BREAKING | Changing a field's type (e.g., from a string to a number) can cause deserialization failures in consumers that still expect the original type. |
Conclusion: To avoid breakage, we generally treat all operations besides addition as a major version change for the affected model. If a field must be changed or removed, we introduce a new version of the REST endpoint or Kafka topic, allowing older clients to continue using the legacy models indefinitely.
Decoupling Business Logic from Persistence
Describe how database fields are decoupled from the business logic, in particular how UUID fields and Maps are saved in the database.
The database fields in the ONAP CLAMP ACM are decoupled from the business logic through the use of JPA Attribute Converters and a clear separation between the Concept Models (used by business logic) and the JPA Models (used for persistence).
This architecture ensures that the complex data structures used by the application (like UUID and Map<String, Object>) are cleanly translated into simple, persistent database types (like VARCHAR or TEXT) without contaminating the core business models with persistence details.
Decoupling Business Logic from Persistence
The decoupling is achieved in two main stages:
A. Model Separation
Concept Models (AutomationComposition.java, AutomationCompositionElement.java): These are the pure Java objects used across the service's business logic, REST APIs, and Kafka models. They define fields using their intended Java types, such as java.util.UUID for identifiers and java.util.Map for configuration data. These models have no JPA annotations.
JPA Models (JpaAutomationComposition.java, JpaAutomationCompositionElement.java): These classes mirror the structure of the Concept Models but exist solely for JPA/Hibernate. They contain all the necessary persistence annotations (@Entity, @Table, @Column). Critically, fields that require special handling for persistence are annotated with @Convert.
B. The @Convert Annotation and Converters
The @Convert annotation is the tool that facilitates decoupling. It instructs JPA (Hibernate) to use a specific AttributeConverter class whenever reading from or writing to a particular column.
This converter acts as the translator layer, allowing the business logic to see one type (e.g., Map) while the database stores another (e.g., String).
2. Handling UUID Fields
In Java business logic, identifiers are typically represented as java.util.UUID. In the database, they are often stored as optimized VARCHAR(36) strings or sometimes a native UUID type.
Implementation:
Business Logic: The Concept Models use the standard UUID type:

```java
// In AutomationComposition.java (Concept Model)
private UUID instanceId;
```

JPA Conversion: The JPA Models leverage a custom AttributeConverter (often built into the JPA framework, or a utility class if needed) to map UUID to a String (VARCHAR) representation in the database.

```java
// In JpaAutomationComposition.java (JPA Model)
@Id
@Column(name = "id")
@Convert(converter = UuidConverter.class) // Assumed: converts UUID to String
private UUID instanceId;
```

Decoupling: The business code only ever deals with a UUID object, guaranteeing type safety and correctness, regardless of how the database physically stores the 36-character identifier string.
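The UuidConverter referenced above is assumed rather than shown; a minimal sketch of its two conversion methods could look like the following. In a real JPA project the class would implement jakarta.persistence.AttributeConverter<UUID, String> and carry the @Converter annotation; the interface is omitted here so the snippet stays self-contained:

```java
import java.util.UUID;

// Sketch of a UUID <-> String converter. The method pair mirrors the
// AttributeConverter contract that JPA invokes transparently.
class UuidConverter {
    // Called when writing the entity attribute to the database column.
    public String convertToDatabaseColumn(UUID attribute) {
        return attribute == null ? null : attribute.toString();
    }

    // Called when reading the column value back into the entity attribute.
    public UUID convertToEntityAttribute(String dbData) {
        return dbData == null ? null : UUID.fromString(dbData);
    }
}
```

The null checks matter: JPA may pass null for nullable columns, and UUID.fromString(null) would otherwise throw.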
3. Handling Maps (JSON Serialization)
Maps are complex data structures that contain key-value pairs of varying types (often used for element properties or out-properties). Since relational databases do not have a native, standard way to store arbitrary nested object structures, they must be converted into a serializable format.
The files provided demonstrate the use of the StringToMapConverter to serialize complex Java Map objects into a single JSON string in the database.
Implementation:
Business Logic: The Concept Models use the standard Map type:

```java
// In AutomationCompositionElement.java (Concept Model)
private Map<String, Object> properties = new LinkedHashMap<>();
```

The Converter: The StringToMapConverter extends AbstractConverter and implements AttributeConverter<Map<String, Object>, String>.
convertToDatabaseColumn(Map<String, Object> map): This method takes the Java Map<String, Object> and uses the internal coder (StandardCoder, likely a wrapper for Jackson or Gson) to serialize it into a JSON string (encode(map)). This string is what is saved in the database column (e.g., as TEXT).
convertToEntityAttribute(String dbData): This method takes the raw JSON string from the database and uses the coder to deserialize it back into a Map<String, Object> (decode(dbData, Map.class)), which is then used by the business logic.
JPA Application: The JPA Model applies this converter to the field:
```java
// In JpaAutomationCompositionElement.java (JPA Model)
@Convert(converter = StringToMapConverter.class)
@Column(name = "properties", columnDefinition = "TEXT")
private Map<String, Object> properties;
```

Decoupling: The business logic can manipulate the properties map as a standard Java object. The internal persistence mechanism transparently handles the conversion to and from JSON, effectively decoupling the business logic from the underlying TEXT column used to store the data.