Research project

Untitled

Authors

Publications

Nested OSTRICH
Publication . Seco, João Costa; Lourenço, Hugo; Parreira, Joana; Ferreira, Carla; NOVALincs
Low-code frameworks strive to simplify and speed up application development. Native support for the reuse and composition of parameterised coarse-grain components (templates) is essential to achieve these goals. OSTRICH, a rich template language for the OutSystems platform, was designed to simplify the use and creation of such templates. However, without a built-in composition mechanism, OSTRICH templates are hard to create and maintain. This paper presents a template composition mechanism and its typing and instantiation algorithms for model-driven low-code development environments. We evolve OSTRICH to support nested templates and allow the instantiation (hatching) of templates in the definition of other templates. Thus, we observe a significant increase in code reuse potential, leading to a safer evolution of applications. The present definition seamlessly extends the existing OutSystems metamodel with template constructs expressed by model annotations that maintain backward compatibility with the existing language toolchain. We present the metamodel, its annotations, and the corresponding validation and instantiation algorithms. In particular, we introduce a type-based validation procedure that ensures that using a template inside a template produces valid models. The work is validated using the OSTRICH benchmark. Our prototype is an extension of the OutSystems IDE allowing the annotation of models and their use to produce new models. We also analyse which existing OutSystems sample screen templates can be improved by using and sharing nested templates.
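Purely to illustrate the idea of instantiating (hatching) one template inside another, here is a toy Python sketch. It is unrelated to the actual OSTRICH metamodel, annotations, or type system; all names and the node encoding are invented for the example.

```python
# Toy model of nested templates: a template body contains literal nodes,
# parameter placeholders, and "use" nodes that instantiate another template.
# Hatching substitutes parameters and recursively hatches nested uses.

def hatch(templates, name, args):
    """Instantiate template `name` with `args`, recursively hatching
    any nested template uses found in its body."""
    params, body = templates[name]
    env = dict(zip(params, args))
    out = []
    for node in body:
        if node[0] == "param":                  # substitute a parameter
            out.append(env[node[1]])
        elif node[0] == "use":                  # nested template: hatch it too
            _, inner, inner_args = node
            resolved = [env.get(a, a) for a in inner_args]
            out.extend(hatch(templates, inner, resolved))
        else:                                   # plain model element
            out.append(node[1])
    return out

templates = {
    "Label": (["text"], [("lit", "Label:"), ("param", "text")]),
    "Form":  (["title"], [("param", "title"),
                          ("use", "Label", ["title"])]),  # template used inside a template
}
print(hatch(templates, "Form", ["Users"]))  # → ['Users', 'Label:', 'Users']
```

The point of the sketch is only the recursion in the `use` branch: instantiating a template may itself require instantiating templates, which is what the paper's validation procedure must reason about statically.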
Causal Consistency Verification in RESTful Systems
Publication . Rodrigues, Hugo Miguel Grilo; Preguiça, Nuno; Freitas, Filipe
Replicated systems cannot maintain both availability and (strong) consistency when exposed to network partitions. Strong consistency requires every read to return the last written value, which can lead clients to experience high latency or even timeout errors. Replicated applications usually rely on weak consistency, since clients can perform operations contacting a single replica, leading to decreased latency and increased availability. Causal consistency is a weak consistency model, yet it is the strongest one for highly available systems. Many applications are switching to this particular consistency model, since it ensures users never observe data items before they observe the ones that influenced their creation. Verifying whether applications satisfy the consistency they claim to provide is no easy task. In this dissertation, we propose an algorithm to verify causal consistency in RESTful applications. Our approach adopts black-box testing: multiple concurrent clients execute operations on a service while the log of interactions is recorded. This log of interactions is then processed to verify whether the results respect causal consistency. The key challenge is to infer causal dependencies among operations executed in different clients without adding any additional metadata to the data maintained by the service. When considering a particular operation, the algorithm builds a new dependency graph that considers one of the possible justifications the operation might have, but if this justification results in failure further ahead in the processing, it is necessary to build another graph considering another justification of that same operation. The algorithm relies on recursion to achieve this backtracking behaviour.
If the algorithm is able to build a graph containing every operation present in the log, where the chosen justifications remain valid until the end of the processing, it outputs that the execution corresponding to that log satisfies causal consistency. The evaluation confirms that the algorithm is able to detect violations when feeding either small or large logs representing executions of RESTful applications that do not satisfy causal consistency.
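The backtracking idea described in the abstract can be sketched in Python. This is a deliberately simplified model, not the dissertation's algorithm: operations are plain tuples, "justification" is reduced to finding a preceding write of the observed value, and log order stands in for the causal order the real algorithm has to infer across clients without metadata.

```python
# Illustrative backtracking over candidate "justifications": each read
# must be justified by some write of the value it observed. Hypothetical
# simplified model: an operation is ("w", key, value) or ("r", key, value).

def candidate_writes(log, read):
    """All writes in the log that produced the value this read observed."""
    _, key, value = read
    return [op for op in log if op[0] == "w" and op[1] == key and op[2] == value]

def justify(log, reads, assigned):
    """Recursively pick a justifying write for each read, backtracking
    when a choice leads to a dead end later in the processing."""
    if not reads:
        return True                              # every read justified
    read, rest = reads[0], reads[1:]
    for write in candidate_writes(log, read):
        if log.index(write) < log.index(read):   # write precedes the read
            assigned[read] = write
            if justify(log, rest, assigned):
                return True
            del assigned[read]                   # backtrack: try another one
    return False

log = [("w", "x", 1), ("r", "x", 1), ("w", "x", 2), ("r", "x", 2)]
reads = [op for op in log if op[0] == "r"]
print(justify(log, reads, {}))  # → True: every read has a justifying write
```

In the real setting the search is over dependency graphs rather than a linear log, but the shape is the same: commit to one justification, recurse, and undo the choice if the remainder of the processing fails.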
A Semantic Consistency Model to Reduce Coordination in Replicated Systems
Publication . Gomes, Nuno Filipe Estêvão; Ferreira, Carla
Large-scale distributed applications need to be available and responsive to satisfy millions of users, which can be achieved by having data geo-replicated across multiple replicas. However, a partitioned system cannot fully sustain both availability and consistency. The use of weak consistency models might lead to data integrity violations, triggered by problematic concurrent updates, such as selling the last ticket of an airline service twice. To overcome possible conflicts, programmers might opt for strong consistency, which guarantees a total order between operations while preserving data integrity. Nevertheless, the illusion of being a non-replicated system affects its availability. In contrast, weaker notions might be used, such as eventual consistency, which boosts responsiveness, as operations are executed directly at the source replica and their effects are propagated to remote replicas in the background. However, this approach might put data integrity at risk. Current protocols that preserve invariants rely on, at least, causal consistency, a consistency model that maintains causal dependencies between operations. In this dissertation, we propose a protocol that includes a semantic consistency model. This consistency model stands between eventual consistency and causal consistency. We guarantee better performance compared with causal consistency, and ensure data integrity. Through semantic analysis, relying on the static analysis tool CISE3, we manage to limit the maximum number of dependencies that each operation will have. To support the protocol, we developed a communication algorithm for a cluster. Additionally, we present an architecture that uses Akka, an actor-based middleware in which actors communicate by exchanging messages. This architecture adopts the publish/subscribe pattern and includes data persistence.
We also consider the stability of operations, as well as a dynamic cluster environment, ensuring the convergence of the replicated state. Finally, we perform an experimental evaluation regarding the performance of the algorithm using standard case studies. The evaluation confirms that by relying on semantic analysis, the system requires less coordination between the replicas than causal consistency, ensuring data integrity.
Robust Contract Evolution in a TypeSafe MicroService Architecture
Publication . Seco, João Costa; Ferreira, Paulo; Lourenço, Hugo; Ferreira, Carla; Ferrão, Lúcio; NOVALincs; AOSA Inc.
Microservice architectures allow for short deployment cycles and immediate effects, but offer no safety mechanisms for service contracts when they need to be changed. Maintaining the soundness of microservice architectures is an error-prone task that is only accessible to the most disciplined development teams. The strategy to evolve a producer service without disrupting its consumers is often to maintain multiple versions of the same interface and to deal with an explicitly managed handoff period and its inherent disadvantages. We present a microservice management system that statically verifies service interface signatures against their references and supports the seamless evolution of compatible interfaces. We define a compatibility relation on types that captures real evolution patterns and embodies known good practices on the evolution of interfaces. Namely, we allow for addition, removal, and renaming of data fields of a producer module without breaking or needing to upgrade consumer services. The evolution of interfaces is supported by runtime-generated proxy components that dynamically adapt data exchanged between services to match the statically checked service code. The model proposed in this paper is instantiated in a core programming language whose semantics is defined by a labelled transition system and a type system that prevents breaking changes from being deployed. Standard soundness results for the core language entail the existence of adapters, and hence ensure the absence of adaptation errors and the correctness of the management model. This adaptive approach allows for gradual deployment of modules, without halting the whole system and without losing or misinterpreting data exchanged between system nodes. Experimental data shows that an average of 57% of deployments that would require adaptation and recompilation are safe under our approach.
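The flavour of a compatibility check over interface types can be illustrated, very loosely, with a structural check in Python. The rules and names below are simplifications invented for the example; the paper defines a formal relation over a core language, not this dictionary comparison.

```python
# Hypothetical, simplified compatibility check between a producer's old
# and new record types: a change counts as consumer-safe here if every
# field the consumer actually reads is still present with the same type,
# possibly under a recorded renaming. Additions and removals of fields
# the consumer never reads are allowed, echoing the evolution patterns
# the abstract mentions (addition, removal, renaming).

def compatible(old_fields, new_fields, consumed, renames=None):
    """old_fields/new_fields map field name -> type name;
    consumed is the set of fields a consumer reads;
    renames maps old producer names to their new names."""
    renames = renames or {}
    for field in consumed:
        target = renames.get(field, field)       # follow a rename, if any
        if target not in new_fields:
            return False                         # a consumed field was dropped
        if new_fields[target] != old_fields[field]:
            return False                         # a consumed field changed type
    return True                                  # unused changes are fine

old = {"id": "int", "name": "str", "age": "int"}
new = {"id": "int", "full_name": "str"}          # renamed "name", dropped "age"
print(compatible(old, new, {"id", "name"}, renames={"name": "full_name"}))  # → True
print(compatible(old, new, {"age"}))             # → False: consumer reads "age"
```

In the paper's system the renaming information is what allows runtime-generated proxies to adapt messages, so a consumer compiled against the old interface keeps working without redeployment.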
VeriFx
Publication . De Porre, Kevin; Ferreira, Carla; Boix, Elisa Gonzalez; DI - Departamento de Informática
Distributed systems adopt weak consistency to ensure high availability and low latency, but state convergence is hard to guarantee due to conflicts. Experts carefully design replicated data types (RDTs) that resemble sequential data types and embed conflict resolution mechanisms that ensure convergence. Designing RDTs is challenging as their correctness depends on subtleties such as the ordering of concurrent operations. Currently, researchers manually verify RDTs, either by paper proofs or using proof assistants. Unfortunately, paper proofs are subject to reasoning flaws and mechanized proofs verify a formalization instead of a real-world implementation. Furthermore, writing mechanized proofs is reserved for verification experts and is extremely time-consuming. To simplify the design, implementation, and verification of RDTs, we propose VeriFx, a specialized programming language for RDTs with automated proof capabilities. VeriFx lets programmers implement RDTs atop functional collections and express correctness properties that are verified automatically. Verified RDTs can be transpiled to mainstream languages (currently Scala and JavaScript). VeriFx provides libraries for implementing and verifying Conflict-free Replicated Data Types (CRDTs) and Operational Transformation (OT) functions. These libraries implement the general execution model of those approaches and define their correctness properties. We use the libraries to implement and verify an extensive portfolio of 51 CRDTs, 16 of which are used in industrial databases, and reproduce a study on the correctness of OT functions.
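For a taste of the kind of RDT that VeriFx targets, here is a state-based grow-only counter (G-Counter), one of the classic CRDTs, sketched in Python rather than in VeriFx itself (VeriFx is its own language that transpiles to Scala and JavaScript). Convergence follows because merge is commutative, associative, and idempotent, which is exactly the sort of property VeriFx verifies automatically.

```python
# A state-based grow-only counter (G-Counter) CRDT. Each replica only
# increments its own entry; merge takes the pointwise maximum, which is
# commutative, associative, and idempotent, so replicas converge
# regardless of the order in which states are exchanged.

class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}                         # replica id -> increments seen

    def increment(self):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + 1

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        """Join with another replica's state via pointwise maximum."""
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

a, b = GCounter("a"), GCounter("b")
a.increment(); a.increment()
b.increment()
a.merge(b)                                       # exchange states in either order:
b.merge(a)
print(a.value(), b.value())                      # → 3 3 (both replicas converge)
```

Proving that such a merge always converges is straightforward here, but for the 51 CRDTs in the paper's portfolio the concurrency subtleties are far less obvious, which is what motivates automated verification.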

Organizational units

Description

Keywords

Contributors

Funders

Funding entity

Fundação para a Ciência e a Tecnologia

Funding programme

3599-PPCDT

Grant number

PTDC/CCI-INF/32081/2017

ID