Integration refers to the process of combining software parts (or subsystems) into one system. An integration framework is a lightweight utility that provides libraries and standardized methods to coordinate messaging among different technologies. As software connects the world in increasingly complex ways, integration is what makes it all possible by facilitating app-to-app communication. Learn more about this necessity of modern software development by keeping a pulse on industry topics such as integrated development environments, API best practices, service-oriented architecture, enterprise service buses, communication architectures, integration testing, and more.
Docker has become an essential tool for developers, offering consistent and isolated environments without installing full-fledged products locally. The ideal setup for microservice development using Spring Boot with MySQL as the backend often involves a remotely hosted database. However, for rapid prototyping or local development, running a MySQL container through Docker offers a more streamlined approach. I encountered a couple of issues while attempting to set up this configuration with Docker Desktop for a proof of concept. An online search revealed a lack of straightforward guides on integrating Spring Boot microservices with MySQL in Docker Desktop; most resources focus primarily on containerizing the Spring Boot application. Recognizing this gap, I decided to write this short article.

Prerequisites

Before diving in, we need the following:

A foundational understanding of Spring Boot and microservices architecture
Familiarity with Docker containers
Docker Desktop installed on our machine

Docker Desktop Setup

We can install Docker Desktop from the official Docker website. Installation is straightforward, and the guided steps can be completed in a few clicks.

Configuring the MySQL Container

When we launch Docker Desktop for the first time, it walks us through a few standard questions, and the registration step can be skipped. Once the desktop app is ready, we search for the MySQL container image, click Pull, and then run the container. When the container is run, a settings dialog pops up; enter the following environment variables:

MYSQL_ROOT_PASSWORD: Specifies the password that will be set for the MySQL root superuser account.
MYSQL_DATABASE: Specifies the name of a database to be created on image startup. If a user/password was supplied (see below), that user will be granted superuser access (corresponding to GRANT ALL) to this database.
MYSQL_USER, MYSQL_PASSWORD: These variables are used to create a new user and set that user's password. This user will be granted superuser permissions for the database specified by the MYSQL_DATABASE variable.

Upon running the container, Docker Desktop displays logs indicating the container's status. We can now connect to the MySQL instance using tools like MySQL Workbench to manage database objects.

Spring Application Configuration

In the Spring application, we add the following settings to application.properties:

Properties
spring.esign.datasource.jdbc-url=jdbc:mysql://localhost:3306/e-sign?allowPublicKeyRetrieval=true&useSSL=false
spring.esign.datasource.username=e-sign
spring.esign.datasource.password=Password1

We opted for a custom prefix, spring.esign, over the default spring.datasource for the database configuration within the Spring Boot application. This approach shines in scenarios where the application requires connections to multiple databases.
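For instance, if the same service later needed a second database (say, an audit store), another prefix could be bound the same way. The sketch below is a hypothetical illustration rather than part of the original setup: the spring.audit.datasource prefix, class, and bean names are assumptions.

Java
import com.zaxxer.hikari.HikariDataSource;
import javax.sql.DataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AuditDbConfig {

    // Binds every property starting with "spring.audit.datasource"
    // (jdbc-url, username, password, ...) to a second, independent connection pool.
    @Bean("auditDataSource")
    @ConfigurationProperties(prefix = "spring.audit.datasource")
    public DataSource auditDataSource() {
        return DataSourceBuilder.create().type(HikariDataSource.class).build();
    }
}

Each prefix keeps its connection settings isolated, which is what makes the multi-database scenario manageable.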
To enable this custom configuration, we define the Spring Boot configuration class ESignDbConfig:

Java
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(
        entityManagerFactoryRef = "eSignEntityManagerFactory",
        transactionManagerRef = "eSignTransactionManager",
        basePackages = "com.icw.esign.repository")
public class ESignDbConfig {

    @Bean("eSignDataSource")
    @ConfigurationProperties(prefix = "spring.esign.datasource")
    public DataSource geteSignDataSource() {
        return DataSourceBuilder.create().type(HikariDataSource.class).build();
    }

    @Bean(name = "eSignEntityManagerFactory")
    public LocalContainerEntityManagerFactoryBean eSignEntityManagerFactory(
            EntityManagerFactoryBuilder builder,
            @Qualifier("eSignDataSource") DataSource dataSource) {
        return builder.dataSource(dataSource)
                .packages("com.icw.esign.dao")
                .build();
    }

    @Bean(name = "eSignTransactionManager")
    public PlatformTransactionManager eSignTransactionManager(
            @Qualifier("eSignEntityManagerFactory") EntityManagerFactory entityManagerFactory) {
        return new JpaTransactionManager(entityManagerFactory);
    }
}

@Bean("eSignDataSource"): This method defines a Spring bean for the eSign module's data source. The @ConfigurationProperties(prefix="spring.esign.datasource") annotation automatically maps and binds all configuration properties starting with spring.esign.datasource from the application's configuration files (such as application.properties or application.yml) to this DataSource object. The method uses DataSourceBuilder to create and configure a HikariDataSource, a highly performant JDBC connection pool. This means the eSign module uses a dedicated database whose connection parameters are isolated from other modules or the main application database.

@Bean(name = "eSignEntityManagerFactory"): This method creates a LocalContainerEntityManagerFactoryBean, which is responsible for creating the EntityManagerFactory. This factory is crucial for managing the JPA entities specific to the eSign module. The EntityManagerFactory is configured to use the eSignDataSource for its database operations and to scan the package com.icw.esign.dao for entity classes. Only entities in this package or its subpackages will be managed by this EntityManagerFactory and can therefore access the eSign database.

@Bean(name = "eSignTransactionManager"): This defines a PlatformTransactionManager bound to the eSign module's EntityManagerFactory. This transaction manager ensures that all database operations performed through the eSignEntityManagerFactory are wrapped in transactions. It enables the application to manage transaction boundaries, roll back operations on failures, and commit changes when operations succeed.

Repository

Now that we have defined the configuration, we can create repository classes and build the other objects required for the API endpoint.
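The repository shown next returns DocESignMaster entities, which the article references but never defines. Below is a minimal sketch of what such an entity might look like; the table and field names are hypothetical, the class is assumed to live in the com.icw.esign.dao package scanned by the entity manager factory above, and jakarta.persistence is assumed (Spring Boot 3.x).

Java
package com.icw.esign.dao;

import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.Table;

// Hypothetical entity mapped to the rows returned by the p_get_doc_esign_info stored procedure.
@Entity
@Table(name = "doc_esign_master")
public class DocESignMaster {

    @Id
    @Column(name = "doc_uuid")
    private String docUuid;

    @Column(name = "esign_status")
    private String eSignStatus;

    @Column(name = "signed_by")
    private String signedBy;

    // Getters and setters omitted for brevity.
}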
Java
@Repository
public class ESignDbRepository {

    private static final Logger logger = LoggerFactory.getLogger(ESignDbRepository.class);

    @Qualifier("eSignEntityManagerFactory")
    @Autowired
    private EntityManager entityManager;

    @Autowired
    ObjectMapper objectMapper;

    String P_GET_DOC_ESIGN_INFO = "p_get_doc_esign_info";

    public List<DocESignMaster> getDocumentESignInfo(String docUUID) {
        StoredProcedureQuery proc = entityManager.createStoredProcedureQuery(P_GET_DOC_ESIGN_INFO, DocESignMaster.class);
        proc.registerStoredProcedureParameter("v_doc_uuid", String.class, ParameterMode.IN);
        proc.setParameter("v_doc_uuid", docUUID);
        try {
            return (List<DocESignMaster>) proc.getResultList();
        } catch (PersistenceException ex) {
            logger.error("Error while fetching document eSign info for docUUID: {}", docUUID, ex);
        }
        return Collections.emptyList();
    }
}

@Qualifier("eSignEntityManagerFactory"): Specifies which EntityManagerFactory should be used to create the EntityManager, ensuring that the correct database configuration is used for eSign operations.

Conclusion

Integrating Spring Boot microservices with Docker Desktop streamlines microservice development and testing. This guide walks through the essential steps of setting up a Spring Boot application and ensuring seamless service communication with a MySQL container hosted on Docker Desktop. This quick setup guide is useful for a proof of concept or for setting up an isolated local development environment.
In the world of Spring Boot, making HTTP requests to external services is a common task. Traditionally, developers have relied on RestTemplate for this purpose. However, with the evolution of the Spring Framework, a more powerful way to handle HTTP requests emerged: the WebClient. Spring Boot 3.2 adds a new option called RestClient, which offers the same kind of fluent API as WebClient but as a synchronous client, providing a more intuitive and modern approach to consuming RESTful services without requiring the reactive stack.

Origins of RestTemplate

RestTemplate has been a staple in the Spring ecosystem for years. It's a synchronous client for making HTTP requests and processing responses. With RestTemplate, developers could easily interact with RESTful APIs using familiar Java syntax. However, as applications became more asynchronous and non-blocking, the limitations of RestTemplate started to become apparent. Here's a basic example of using RestTemplate to fetch data from an external API:

Java
var restTemplate = new RestTemplate();
var response = restTemplate.getForObject("https://api.example.com/data", String.class);
System.out.println(response);

Introduction of WebClient

With the advent of Spring WebFlux, an asynchronous, non-blocking web framework, WebClient was introduced as a modern alternative to RestTemplate. WebClient embraces reactive principles, making it well-suited for building reactive applications. It offers support for both synchronous and asynchronous communication, along with a fluent API for composing requests. Here's how you would use WebClient to achieve the same HTTP request:

Java
var webClient = WebClient.create();
var response = webClient.get()
        .uri("https://api.example.com/data")
        .retrieve()
        .bodyToMono(String.class);
response.subscribe(System.out::println);

Enter RestClient in Spring Boot 3.2

Spring Boot 3.2 brings RestClient, a synchronous client with a fluent API in the style of WebClient. RestClient simplifies the process of making HTTP requests by reducing boilerplate code, offering a blocking programming model familiar to RestTemplate users along with a more developer-friendly interface. Let's take a look at how RestClient can be used:

Java
var restClient = RestClient.create();
var response = restClient.get()
        .uri("https://api.example.com/data")
        .retrieve()
        .toEntity(String.class);
System.out.println(response.getBody());

With RestClient, the code becomes more concise and readable. RestClient handles the underlying HTTP client configuration internally, abstracting away the complexities of setting up and managing HTTP connections.
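In a real application, the client is typically configured once and injected where needed. The following is a minimal sketch; the base URL, bean, class, and method names are illustrative assumptions rather than part of the article.

Java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestClient;

@Configuration
class HttpClientsConfig {

    // A single, pre-configured RestClient shared across the application.
    @Bean
    RestClient exampleApiClient() {
        return RestClient.builder()
                .baseUrl("https://api.example.com")
                .defaultHeader("Accept", "application/json")
                .build();
    }
}

@Service
class DataService {

    private final RestClient restClient;

    DataService(RestClient restClient) {
        this.restClient = restClient;
    }

    String fetchData() {
        // The relative URI resolves against the base URL configured above.
        return restClient.get()
                .uri("/data")
                .retrieve()
                .body(String.class);
    }
}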
Comparing RestClient With RestTemplate

Let's compare RestClient with RestTemplate by looking at some common scenarios.

Create

RestTemplate:

Java
var restTemplate = new RestTemplate();

RestClient:

Java
var restClient = RestClient.create();

Or we can build a RestClient on top of our old RestTemplate's configuration:

Java
var myOldRestTemplate = new RestTemplate();
var restClient = RestClient.builder(myOldRestTemplate).build();

GET Request

RestTemplate:

Java
var response = restTemplate.getForObject("https://api.example.com/data", String.class);

RestClient:

Java
var response = restClient.get()
        .uri("https://api.example.com/data")
        .retrieve()
        .toEntity(String.class);

POST Request

RestTemplate:

Java
ResponseEntity<String> response = restTemplate.postForEntity("https://api.example.com/data", request, String.class);

RestClient:

Java
var response = restClient.post()
        .uri("https://api.example.com/data")
        .body(request)
        .retrieve()
        .toEntity(String.class);

Error Handling

RestTemplate:

Java
try {
    String response = restTemplate.getForObject("https://api.example.com/data", String.class);
} catch (RestClientException ex) {
    // Handle exception
}

RestClient:

Java
String response = restClient.get()
        .uri("https://api.example.com/this-url-does-not-exist")
        .retrieve()
        .onStatus(HttpStatusCode::is4xxClientError, (req, res) -> {
            throw new MyCustomRuntimeException(res.getStatusCode(), res.getHeaders());
        })
        .body(String.class);

As seen in these examples, RestClient offers a more streamlined approach to making HTTP requests compared to RestTemplate. The Spring documentation gives us many other examples.

Conclusion

In Spring Boot 3.2, RestClient emerges as a modern replacement for RestTemplate, offering a more intuitive and concise way to consume RESTful services. RestClient adopts the fluent API style popularized by WebClient while remaining a straightforward, synchronous client, simplifying the process of making HTTP requests. Developers can now enjoy improved productivity and cleaner code when interacting with external APIs in their Spring Boot applications. It's recommended to transition from RestTemplate to RestClient for a more efficient and future-proof codebase.
NoSQL databases provide a flexible and scalable option for storing and retrieving data in database management. However, they can struggle with object-oriented programming paradigms, such as inheritance, which is a fundamental concept in languages like Java. This article explores the impedance mismatch that arises when dealing with inheritance in NoSQL databases.

The Inheritance Challenge in NoSQL Databases

The term “impedance mismatch” refers to the disconnect between the object-oriented world of programming languages like Java and the tabular, document-oriented, or graph-based structures of NoSQL databases. One area where this mismatch is particularly evident is in handling inheritance.

In Java, inheritance allows you to create a hierarchy of classes, where a subclass inherits properties and behaviors from its parent class. This concept is deeply ingrained in Java programming and is often used to model real-world relationships. However, NoSQL databases have no joins, so the inheritance structure needs to be handled differently.

Jakarta Persistence (JPA) and Inheritance Strategies

Before diving into more advanced solutions, it’s worth mentioning that in the world of Jakarta Persistence (formerly the Java Persistence API, JPA) there are strategies to simulate inheritance in relational databases. These strategies include:

JOINED inheritance strategy: In this approach, fields specific to a subclass are mapped to a separate table from the fields common to the parent class. A join operation is performed to instantiate the subclass when needed.
SINGLE_TABLE inheritance strategy: This strategy uses a single table that represents the entire class hierarchy. A discriminator column is used to differentiate between subclasses.
TABLE_PER_CLASS inheritance strategy: Each concrete entity class in the hierarchy corresponds to its own table in the database.

These strategies work well in relational databases but are not directly applicable to NoSQL databases, primarily because NoSQL databases do not support traditional joins.

Live Code Session: Java SE, Eclipse JNoSQL, and MongoDB

In this live code session, we will create a Java SE project using MongoDB as our NoSQL database. We’ll focus on managing game characters, specifically Mario and Sonic characters, using Eclipse JNoSQL. You can run MongoDB locally using Docker or in the cloud with MongoDB Atlas. We’ll start with the database setup and then proceed to the Java code implementation.

Setting Up MongoDB Locally

To run MongoDB locally, you can use Docker with the following command:

Shell
docker run -d --name mongodb-instance -p 27017:27017 mongo

Alternatively, you can choose to execute it in the cloud by following the instructions provided by MongoDB Atlas. With the MongoDB database up and running, let’s create our Java project.

Creating the Java Project

We’ll create a Java SE project using Maven and the maven-archetype-quickstart archetype.
This project will utilize the following technologies and dependencies: Jakarta CDI Jakarta JSONP Eclipse MicroProfile Eclipse JNoSQL database Maven Dependencies Add the following dependencies to your project’s pom.xml file: XML <dependencies> <dependency> <groupId>org.jboss.weld.se</groupId> <artifactId>weld-se-shaded</artifactId> <version>${weld.se.core.version}</version> <scope>compile</scope> </dependency> <dependency> <groupId>org.eclipse</groupId> <artifactId>yasson</artifactId> <version>3.0.3</version> <scope>compile</scope> </dependency> <dependency> <groupId>io.smallrye.config</groupId> <artifactId>smallrye-config-core</artifactId> <version>3.2.1</version> <scope>compile</scope> </dependency> <dependency> <groupId>org.eclipse.microprofile.config</groupId> <artifactId>microprofile-config-api</artifactId> <version>3.0.2</version> <scope>compile</scope> </dependency> <dependency> <groupId>org.eclipse.jnosql.databases</groupId> <artifactId>jnosql-mongodb</artifactId> <version>${jnosql.version}</version> </dependency> <dependency> <groupId>net.datafaker</groupId> <artifactId>datafaker</artifactId> <version>2.0.2</version> </dependency> </dependencies> Make sure to replace ${jnosql.version} with the appropriate version of Eclipse JNoSQL you intend to use. In the next section, we will proceed with implementing our Java code. Implementing Our Java Code Our GameCharacter class will serve as the parent class for all game characters and will hold the common attributes shared among them. We’ll use inheritance and discriminator columns to distinguish between Sonic’s and Mario’s characters. Here’s the initial definition of the GameCharacter class: Java @Entity @DiscriminatorColumn("type") @Inheritance public abstract class GameCharacter { @Id @Convert(UUIDConverter.class) protected UUID id; @Column protected String character; @Column protected String game; public abstract GameType getType(); } In this code: We annotate the class with @Entity to indicate that it is a persistent entity in our MongoDB database. We use @DiscriminatorColumn("type") to specify that a discriminator column named “type” will be used to differentiate between subclasses. @Inheritance indicates that this class is part of an inheritance hierarchy. The GameCharacter class has a unique identifier (id), attributes for character name (character) and game name (game), and an abstract method getType(), which its subclasses will implement to specify the character type. Specialization Classes: Sonic and Mario Now, let’s create the specialization classes for Sonic and Mario entities. These classes will extend the GameCharacter class and provide additional attributes specific to each character type. We’ll use @DiscriminatorValue to define the values the “type” discriminator column can take for each subclass. Java @Entity @DiscriminatorValue("SONIC") public class Sonic extends GameCharacter { @Column private String zone; @Override public GameType getType() { return GameType.SONIC; } } In the Sonic class: We annotate it with @Entity to indicate it’s a persistent entity. @DiscriminatorValue("SONIC") specifies that the “type” discriminator column will have the value “SONIC” for Sonic entities. We add an attribute zone-specific to Sonic characters. The getType() method returns GameType.SONIC, indicating that this is a Sonic character. 
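The getType() method returns a value of the GameType enum, which the article references but never defines. A minimal sketch, assuming only the two franchises covered here:

Java
// Hypothetical enum backing GameCharacter.getType(); the article uses
// GameType.SONIC and GameType.MARIO but does not show the declaration.
public enum GameType {
    SONIC,
    MARIO
}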
Java @Entity @DiscriminatorValue("MARIO") public class Mario extends GameCharacter { @Column private String locations; @Override public GameType getType() { return GameType.MARIO; } } Similarly, in the Mario class: We annotate it with @Entity to indicate it’s a persistent entity. @DiscriminatorValue("MARIO") specifies that the “type” discriminator column will have the value “MARIO” for Mario entities. We add an attribute locations specific to Mario characters. The getType() method returns GameType.MARIO, indicating that this is a Mario character. With this modeling approach, you can easily distinguish between Sonic and Mario characters in your MongoDB database using the discriminator column “type.” We will create our first database integration with MongoDB using Eclipse JNoSQL. To simplify, we will generate data using the Data Faker library. Our Java application will insert Mario and Sonic characters into the database and perform basic operations. Application Code Here’s the main application code that generates and inserts data into the MongoDB database: Java public class App { public static void main(String[] args) { try (SeContainer container = SeContainerInitializer.newInstance().initialize()) { DocumentTemplate template = container.select(DocumentTemplate.class).get(); DataFaker faker = new DataFaker(); Mario mario = Mario.of(faker.generateMarioData()); Sonic sonic = Sonic.of(faker.generateSonicData()); // Insert Mario and Sonic characters into the database template.insert(List.of(mario, sonic)); // Count the total number of GameCharacter documents long count = template.count(GameCharacter.class); System.out.println("Total of GameCharacter: " + count); // Find all Mario characters in the database List<Mario> marioCharacters = template.select(Mario.class).getResultList(); System.out.println("Find all Mario characters: " + marioCharacters); // Find all Sonic characters in the database List<Sonic> sonicCharacters = template.select(Sonic.class).getResultList(); System.out.println("Find all Sonic characters: " + sonicCharacters); } } } In this code: We use the SeContainer to manage our CDI container and initialize the DocumentTemplate from Eclipse JNoSQL. We create instances of Mario and Sonic characters using data generated by the DataFaker class. We insert these characters into the MongoDB database using the template.insert() method. We count the total number of GameCharacter documents in the database. We retrieve and display all Mario and Sonic characters from the database. Resulting Database Structure As a result of running this code, you will see data in your MongoDB database similar to the following structure: JSON [ { "_id": "39b8901c-669c-49db-ac42-c1cabdcbb6ed", "character": "Bowser", "game": "Super Mario Bros.", "locations": "Mount Volbono", "type": "MARIO" }, { "_id": "f60e1ada-bfd9-4da7-8228-6a7f870e3dc8", "character": "Perfect Chaos", "game": "Sonic Rivals 2", "type": "SONIC", "zone": "Emerald Hill Zone" } ] As shown in the database structure, each document contains a unique identifier (_id), character name (character), game name (game), and a discriminator column type to differentiate between Mario and Sonic characters. You will see more characters in your MongoDB database depending on your generated data. This integration demonstrates how to insert, count, and retrieve game characters using Eclipse JNoSQL and MongoDB. You can extend and enhance this application to manage and manipulate your game character data as needed. 
We will create repositories for managing game characters using Eclipse JNoSQL. We will have a Console repository for general game characters and a SonicRepository specifically for Sonic characters. These repositories will allow us to interact with the database and perform various operations easily. Let’s define the repositories for our game characters. Console Repository Java @Repository public interface Console extends PageableRepository<GameCharacter, UUID> { } The Console repository extends PageableRepository and is used for general game characters. It provides common CRUD operations and pagination support. Sonic Repository Java @Repository public interface SonicRepository extends PageableRepository<Sonic, UUID> { } The SonicRepository extends PageableRepository but is specifically designed for Sonic characters. It inherits common CRUD operations and pagination from the parent repository. Main Application Code Now, let’s modify our main application code to use these repositories. For Console Repository Java public static void main(String[] args) { Faker faker = new Faker(); try (SeContainer container = SeContainerInitializer.newInstance().initialize()) { Console repository = container.select(Console.class).get(); for (int index = 0; index < 5; index++) { Mario mario = Mario.of(faker); Sonic sonic = Sonic.of(faker); repository.saveAll(List.of(mario, sonic)); } long count = repository.count(); System.out.println("Total of GameCharacter: " + count); System.out.println("Find all game characters: " + repository.findAll().toList()); } System.exit(0); } In this code, we use the Console repository to save both Mario and Sonic characters, demonstrating its ability to manage general game characters. For Sonic Repository Java public static void main(String[] args) { Faker faker = new Faker(); try (SeContainer container = SeContainerInitializer.newInstance().initialize()) { SonicRepository repository = container.select(SonicRepository.class).get(); for (int index = 0; index < 5; index++) { Sonic sonic = Sonic.of(faker); repository.save(sonic); } long count = repository.count(); System.out.println("Total of Sonic characters: " + count); System.out.println("Find all Sonic characters: " + repository.findAll().toList()); } System.exit(0); } This code uses the SonicRepository to save Sonic characters specifically. It showcases how to work with a repository dedicated to a particular character type. With these repositories, you can easily manage, query, and filter game characters based on their type, simplifying the code and making it more organized. Conclusion In this article, we explored the seamless integration of MongoDB with Java using the Eclipse JNoSQL framework for efficient game character management. We delved into the intricacies of modeling game characters, addressing challenges related to inheritance in NoSQL databases while maintaining compatibility with Java's object-oriented principles. By employing discriminator columns, we could categorize characters and store them within the MongoDB database, creating a well-structured and extensible solution. Through our Java application, we demonstrated how to generate sample game character data using the Data Faker library and efficiently insert it into MongoDB. We performed essential operations, such as counting the number of game characters and retrieving specific character types. 
Moreover, we introduced the concept of repositories in Eclipse JNoSQL, showcasing their value in simplifying data management and enabling focused queries based on character types. This article provides a solid foundation for harnessing the power of Eclipse JNoSQL and MongoDB to streamline NoSQL database interactions in Java applications, making it easier to manage and manipulate diverse data sets. Source code
Statelessness in RESTful applications poses challenges and opportunities, influencing how we manage fundamental security aspects such as authentication and authorization. This blog aims to delve into this topic, explore its impact, and offer insights into the best practices for handling stateless REST applications. Understanding Statelessness in REST REST, or REpresentational State Transfer, is an architectural style that defines a set of constraints for creating web services. One of its core principles is statelessness, which means that each request from a client to a server must contain all the information needed to understand and process the request. This model stands in contrast to stateful approaches, where the server stores user session data between requests. The stateless nature of REST brings significant benefits, particularly in terms of scalability and reliability. By not maintaining a state between requests, RESTful services can handle requests independently, allowing for more efficient load balancing and reduced server memory requirements. However, this approach introduces complexities in managing user authentication and authorization. Authentication in Stateless REST Applications Token-Based Authentication The most common approach to handling authentication in stateless REST applications is through token-based methods, like JSON Web Tokens (JWT). In this model, the server generates a token that encapsulates user identity and attributes when they log in. This token is then sent to the client, which will include it in the HTTP header of subsequent requests. Upon receiving a request, the server decodes the token to verify user identity. Finally, the authorization service can make decisions based on the user permissions. // Example of a JWT token in an HTTP header Authorization: Bearer <token> OAuth 2.0 Another widely used framework is OAuth 2.0, particularly for applications requiring third-party access. OAuth 2.0 allows users to grant limited access to their resources from another service without exposing their credentials. It uses access tokens, providing layered security and enabling scenarios where an application needs to act on behalf of the user. Authorization in Stateless REST Applications Once authentication is established, the next challenge is authorization — checking the user has permission to perform the relevant actions on resources. Keeping REST applications stateless requires decoupling policy and code. In traditional stateful applications, authorization decisions are made in imperative code statements that clutter the application logic and rely on the state of the request. In a stateless application, policy logic should be separated from the application code and be defined separately as policy code (using policy as code engines and languages), thus keeping the application logic stateless. Here are some examples of stateless implementation of common policy models: Role-Based Access Control (RBAC) Role-Based Access Control (RBAC) is a common pattern where users are assigned roles that dictate the access level a user has to resources. When decoupling policy from the code, the engine syncs the user roles from the identity provider. By providing the JWT with the identity, the policy engine can return a decision on whether a role is allowed to perform the action or not. Attribute-Based Access Control (ABAC) A more dynamic approach is Attribute-Based Access Control (ABAC), which evaluates a set of policies against the attributes of users, resources, and the environment. 
This model offers more granular control and flexibility, which is particularly useful in complex systems with varying access requirements. To keep REST applications stateless, these policies must be declared in a separate code base, and the data synchronization with the policy engine must itself be stateless.

Relationship-Based Access Control (ReBAC)

In applications where data privacy is of top importance and users can take ownership of their data by declaring relationships, using a centralized relationship graph outside of the REST application is necessary to maintain the statelessness of the application logic. A well-crafted implementation of an authorization service has the application issue a stateless check call with the identity and the resource instance; the authorization service then evaluates it against the stateful graph kept separate from the application.

Security Considerations in Stateless Authentication and Authorization

Handling Token Security

In stateless REST applications, token security is critical, and developers must ensure that tokens are encrypted and transmitted securely. The use of HTTPS is mandatory to prevent token interception. Additionally, token expiration mechanisms must be implemented to reduce the risk of token hijacking. It’s a common practice to pair short-lived access tokens with longer-lived refresh tokens to balance security and user convenience.

Preventing CSRF and XSS Attacks

Cross-Site Request Forgery (CSRF) and Cross-Site Scripting (XSS) are two prevalent security threats in web applications. Using tokens instead of cookies in stateless REST APIs can inherently mitigate CSRF attacks, as the browser does not automatically send the token. However, developers must still be vigilant about XSS attacks, which can compromise token security. Implementing Content Security Policy (CSP) headers and sanitizing user input are effective strategies against XSS.

Performance Implications

Caching Strategies

Statelessness in REST APIs poses unique challenges for caching, as user-specific data cannot be stored on the server. Leveraging HTTP cache headers effectively allows clients to cache responses appropriately, reducing the load on the server and improving response times. ETag headers and conditional requests can optimize bandwidth usage and enhance overall application performance.

Load Balancing and Scalability

Stateless applications are inherently more scalable because they allow for straightforward load balancing. Since there’s no session state tied to a specific server, any server can handle any request. This property enables seamless horizontal scaling, which is essential for applications anticipating high traffic volumes.

Conclusion: Balancing Statelessness With Practicality

Implementing authentication and authorization in stateless REST applications involves a careful balance between security, performance, and usability. While statelessness offers numerous advantages in terms of scalability and simplicity, it also necessitates robust security measures and thoughtful system design. The implications of token-based authentication, access control mechanisms, security threats, and performance strategies must all be considered to build effective and secure RESTful services.
In the fast-changing world of software development, a disruptive technique has acquired significant traction: API-First Development. This strategy substantially transforms the old application development paradigms, putting Application Programming Interfaces (APIs) at the center of the development lifecycle. Understanding API-First Development API-First Development is more than a development approach; it’s a concept that changes the way we think about, create, and implement software. At its core, API-First Development encourages developers to prioritize the establishment of APIs as basic building blocks before moving on to other elements of program development. Why API-First? Historically, APIs were often considered secondary and implemented after the core functionalities or user interfaces were defined. However, this approach often led to inefficiencies, with APIs struggling to meet the evolving needs of applications. API-First Development acknowledges the critical role APIs play in today’s interconnected digital landscape and proposes a radical shift in perspective. Agile and Iterative Development API-First Development aligns seamlessly with agile development methodologies, emphasizing iterative and collaborative processes. By defining APIs at the outset, teams can work in parallel, ensuring that backend services and frontend interfaces evolve harmoniously. This not only accelerates development timelines but also promotes adaptability to changing requirements. Seamless Integration In a world where applications increasingly rely on third-party services, cloud platforms, and diverse devices, seamless integration is paramount. APIs act as the glue that binds these components together. Prioritizing API design ensures that integration points are well-defined, making it easier for developers to connect different parts of the system reliably. Reusability and Scalability Well-designed APIs facilitate the reusability of code components. Instead of reinventing the wheel for each project, developers can leverage existing APIs, promoting efficiency and consistency across applications. This reusability factor significantly contributes to scalability, allowing organizations to build upon proven components as they grow. Key Principles of API-First Development Clear API Design: API-First begins with a clear and comprehensive API design. OpenAPI Specification (OAS) or RAML (RESTful API Modeling Language) are commonly used tools for designing and documenting APIs effectively. These design documents act as a contract between backend and frontend teams, providing a shared understanding of how the application will function. Mocking and Testing: Once the API design is complete, developers create mock APIs to simulate the behavior of the actual services. This early testing phase helps identify any issues or mismatches between design and implementation before substantial development efforts are invested. Tools like Postman or Swagger are invaluable for API testing and validation. Parallel Development: With well-defined APIs and mock services in place, development teams can work concurrently on the backend and frontend. This parallel development approach accelerates the overall project timeline and allows for more agile responses to changing requirements. Continuous Monitoring and Iteration: API-First Development doesn’t end with the initial implementation. Continuous monitoring of API performance, user feedback, and system requirements is crucial. 
Iterative updates to the API design and implementation ensure that the software remains responsive to evolving needs. The Benefits of an API-First Approach Adopting an API-First approach offers a multitude of benefits that resonate throughout the entire software development lifecycle. Let’s explore these advantages in detail. 1. Enhanced Collaboration API-First Development fosters collaboration between different teams within an organization. By establishing clear and standardized API specifications at the outset, developers, designers, and stakeholders can work concurrently and effectively. APIs act as a common language that facilitates communication between diverse teams, bridging the gap between backend and frontend development. Collaboration is further enhanced by providing a shared understanding of the application’s functionality. The API design document becomes a central reference point, ensuring that all teams are aligned in their objectives. This collaborative synergy reduces miscommunication, accelerates development cycles, and ultimately leads to the delivery of more cohesive and integrated software solutions. 2. Flexibility and Adaptability API-First Development instills flexibility and adaptability into the core of the software architecture. APIs designed with this approach are inherently modular and loosely coupled, allowing for easier modifications and updates. The separation of concerns between backend and frontend components enables teams to make changes independently, promoting agility in responding to evolving requirements. In a rapidly changing technological landscape, where innovation and market demands drive constant updates, the ability to adapt quickly is paramount. API-First Development positions organizations to embrace change seamlessly, ensuring that their software remains relevant and capable of meeting evolving user expectations. 3. Improved User Experience The decoupling of backend logic and frontend interfaces in API-First Development results in an improved user experience. Frontend developers can iterate on the user interface independently without being constrained by the backend implementation details. This separation allows for more rapid prototyping, testing, and refinement of the user interface, ultimately leading to a more responsive and user-friendly application. Additionally, the clarity of API specifications ensures that front-end developers have a clear understanding of the available functionalities. This understanding facilitates the creation of interfaces that align closely with user needs and expectations. As a result, users interact with a software solution that not only meets their requirements but also provides a seamless and enjoyable experience. 4. Reusability and Scalability One of the fundamental advantages of API-First Development is the promotion of code reusability. Well-designed APIs encapsulate specific functionalities, making them modular and easily transferable across different projects. This reusability not only saves development time but also ensures consistency and reliability in the implementation of common features. As organizations expand and develop a portfolio of applications, the reusability of APIs becomes a powerful asset. Components that have proven successful in one project can be seamlessly integrated into others, fostering scalability without sacrificing quality. This approach significantly reduces the time and resources required to develop new features or even entirely new applications. 5. 
Efficient Development Lifecycle API-First Development streamlines the software development lifecycle by providing a clear roadmap from the outset. The design-first approach ensures that teams have a well-defined plan before embarking on implementation, reducing the likelihood of misunderstandings or deviations from the intended functionality. The use of mock APIs in the early stages allows frontend developers to begin work on the user interface while backend development is in progress. This parallel development not only accelerates the overall timeline but also facilitates early testing and validation of the API design. As a result, the development lifecycle becomes more efficient, with teams working collaboratively and iteratively towards the common goal of delivering a robust and fully functional application. 6. Improved Testing and Debugging API-First Development promotes effective testing practices throughout the development process. The early creation of mock APIs enables comprehensive testing of API functionality before actual implementation begins. Tools like Postman or Swagger facilitate rigorous testing of various scenarios, input variations, and error handling. The clarity of API specifications enhances the precision of testing efforts. Test scenarios can be defined based on the expected behavior outlined in the API design document, ensuring that testing aligns closely with the intended functionality. This meticulous approach to testing not only identifies potential issues early in the development process but also contributes to the overall reliability and stability of the software. 7. Cost-Efficiency The benefits of API-First Development extend to cost-efficiency in various aspects of the software development lifecycle. The collaborative and iterative nature of the approach reduces the likelihood of rework, mitigating the costs associated with fixing misunderstandings or misalignments between development teams. Additionally, the reusability of well-designed APIs minimizes the effort required to implement common functionalities across multiple projects. Organizations can leverage existing components, reducing development time and costs associated with building features from scratch. This cost-effective approach positions API-First Development as a strategic investment with long-term benefits for organizations of all sizes. Implementing API-First Development Implementing API-First Development involves a series of strategic steps to ensure a seamless and efficient development process. Let’s delve into each of these steps in detail. API Design Define Clear Objectives: Start by clearly defining the objectives of your API. Understand the specific functionalities it needs to provide and how it fits into the larger architecture of your application. This initial step sets the foundation for the entire design process. Use API Design Tools: Leverage API design tools such as OpenAPI Specification (OAS) or RAML to create a detailed blueprint of your API. These tools allow you to define endpoints, request-response formats, authentication mechanisms, and other crucial details. This design document becomes a collaborative reference for both backend and frontend teams. Foster Collaboration: API design is a collaborative effort. Involve key stakeholders, including backend developers, frontend developers, and system architects, in the design process. 
This collaborative approach ensures that the API meets the needs of all parties involved and prevents misunderstandings later in the development process. Mocking and Testing Create Mock APIs: Once the API design is finalized, create mock APIs to simulate the behavior of the actual services. Mocking allows frontend developers to start working on the user interface without waiting for the backend implementation. It also serves as an early testing phase to identify any discrepancies between design and implementation. Test for Various Scenarios: Use tools like Postman or Swagger to test your mock APIs rigorously. Verify different scenarios, input variations, and error handling to ensure that the API behaves as expected. Early testing is crucial for identifying and addressing potential issues before they escalate. Gather Feedback: Encourage stakeholders, including developers and product managers, to provide feedback on the mock APIs. This iterative feedback loop ensures that any discrepancies or improvements are addressed early in the development process, reducing the likelihood of costly changes later on. Parallel Development Backend Development: With the API design and mock APIs in place, backend development can commence. Backend developers can focus on implementing the core functionalities of the API, ensuring that it aligns with the predefined design. Continuous communication with the front-end team is essential to address any emerging questions or challenges. Frontend Development: Simultaneously, frontend developers can start working on the user interface based on the mock APIs. This parallel development approach accelerates the overall project timeline, allowing different teams to progress simultaneously. The well-defined API specifications serve as a clear guideline for frontend developers, reducing dependencies on backend implementation details. Regular Sync Meetings: Facilitate regular sync meetings between backend and frontend teams to ensure alignment and address any integration challenges. These meetings foster open communication, allowing teams to share progress, discuss potential roadblocks, and make adjustments based on evolving requirements. Continuous Monitoring and Iteration Performance Monitoring: Once the API is implemented, continuously monitor its performance. Utilize monitoring tools to track response times, error rates, and overall reliability. Identify any performance bottlenecks and address them promptly to maintain a high-quality user experience. User Feedback: Gather feedback from end-users regarding the functionality and performance of the application. This user-centric approach provides valuable insights into how the API performs in real-world scenarios. Address user feedback through iterative updates, ensuring that the software remains responsive to evolving needs. Iterative Updates: API-First Development is inherently iterative. Based on monitoring data, user feedback, and evolving requirements, make iterative updates to the API design and implementation. This continuous improvement process ensures that the software remains adaptable to changing circumstances and provides a foundation for future enhancements. Testing in API-First Development: Ensuring Reliability and Functionality Testing is a critical component of API-First Development, ensuring that APIs are reliable, functional, and secure. This section explores various testing strategies to validate the robustness of APIs throughout the development lifecycle. 
Unit Testing Endpoint Testing: Conduct unit tests for individual API endpoints to ensure that they produce the expected output. Verify that each endpoint handles different input scenarios and responds appropriately. Data Validation: Validate data input and output to ensure that the API processes information correctly. Unit tests should cover various data types, ensuring that the API can handle diverse data sets reliably. Error Handling: Test the API’s error handling mechanisms by intentionally triggering errors. Ensure that error responses are clear, informative, and follow consistent patterns. Effective error handling contributes to the overall reliability of the API. Integration Testing Component Interaction: Validate the interaction between different components of the system through integration testing. Ensure that the API seamlessly integrates with databases, external services, and other dependencies. Integration testing identifies any issues arising from the collaboration of multiple components. Endpoint Integration: Test the integration of various endpoints to verify that they work together as expected. Integration testing is crucial for identifying any inconsistencies in the communication between different parts of the system. It ensures a cohesive flow of data and functionalities across the entire API. Dependency Testing: Verify the API’s dependencies, including external services and third-party integrations. Ensure that the API behaves as expected when interacting with these dependencies. Dependency testing helps preemptively address compatibility issues. Performance Testing Load Testing: Assess the responsiveness and scalability of APIs under various load conditions. Load testing helps identify performance bottlenecks and ensures that the API can handle the expected user load. It provides insights into the API’s capacity and helps optimize its performance. Stress Testing: Subject the API to stress testing to evaluate its stability under extreme conditions. Identify the breaking points and implement measures to enhance the overall robustness of the system. Stress testing helps uncover vulnerabilities that may only manifest under intense usage scenarios. Endurance Testing: Evaluate the API’s ability to sustain prolonged periods of usage. Endurance testing helps identify issues related to resource leaks, memory management, and other factors that may affect long-term reliability. It ensures the API’s stability over extended operational durations. Security Testing Authentication and Authorization: Verify that authentication and authorization mechanisms are robust. Security testing ensures that APIs are resistant to potential vulnerabilities, protecting sensitive data and user privacy. Test for common security threats, such as injection attacks, and implement measures to mitigate risks. Data Encryption: Ensure that data transmitted through the API is encrypted to maintain confidentiality. Security testing helps identify and address any weaknesses in data protection measures. Assess the effectiveness of encryption protocols and make adjustments as necessary. API Token Security: If the API uses tokens for authentication, conduct security testing to validate the strength of token-based security. Ensure that tokens are securely generated, transmitted, and validated to prevent unauthorized access. API token security is a crucial aspect of protecting API endpoints. 
Compliance Testing: Depending on the industry and regulatory requirements, conduct compliance testing to ensure that the API adheres to relevant standards and guidelines. Compliance testing helps mitigate legal risks and ensures that the API aligns with industry best practices. Documentation Maintain Up-to-Date Documentation: Continuously update the documentation to reflect changes in the API. Well-maintained documentation serves as a reference for developers, reducing the learning curve for new team members and external collaborators. Interactive Documentation: Consider using tools that generate interactive documentation from API specifications. Interactive documentation allows developers to explore and test API endpoints directly from the documentation, enhancing the overall developer experience. Code Samples: Include code samples and usage examples in the documentation to assist developers in implementing and integrating with the API. Code samples provide practical insights into how to interact with different endpoints and handle various scenarios. The Future of Software Development As technology continues to advance, API-First Development is poised to become even more integral in shaping the future of software development. The following trends and considerations highlight the evolving landscape and the pivotal role that API-First Development will play: Proliferation of Microservices Architecture Microservices architecture, characterized by the decomposition of applications into small, independently deployable services, has gained immense popularity. API-First Development aligns seamlessly with this architectural paradigm, as APIs serve as the communication layer between microservices. The modular nature of APIs facilitates the creation, deployment, and scaling of microservices, enabling organizations to build flexible and scalable systems. Rise of Serverless Computing Serverless computing, where applications run in a cloud environment without the need for managing servers, is reshaping how software is developed and deployed. API-First Development is well-suited for serverless architectures, as APIs define the interactions between serverless functions. By prioritizing API design, developers can ensure efficient communication between serverless components, leading to more agile and scalable applications. Emphasis on Cross-Platform Development The demand for applications that seamlessly operate across diverse platforms continues to grow. API-First Development supports cross-platform development by providing a standardized interface for different clients, be it web browsers, mobile devices, or IoT devices. This interoperability enhances the user experience and simplifies the development and maintenance of applications in a multi-platform landscape. Integration With Emerging Technologies Emerging technologies such as artificial intelligence (AI), machine learning (ML), and the Internet of Things (IoT) are becoming integral parts of modern applications. API-First Development facilitates the integration of these technologies by defining clear and standardized interfaces. APIs act as the bridge, allowing applications to leverage the capabilities of emerging technologies without overhauling the entire system. Evolving Security Practices As the digital landscape evolves, so do the challenges related to cybersecurity. API-First Development places a strong emphasis on security, and future developments will likely see an even greater focus on enhancing API security practices. 
This includes the adoption of advanced authentication mechanisms, encryption standards, and proactive measures to address emerging security threats. Continued Embrace of DevOps Culture The collaboration between development and operations teams, commonly known as DevOps, remains a cornerstone of efficient software development. API-First Development inherently supports DevOps practices by promoting collaboration, automation, and continuous integration. The future of software development will see an even deeper integration of API-first principles with DevOps, streamlining the entire development lifecycle. Democratization of Development API-First Development contributes to the democratization of development by enabling teams with diverse skill sets to work cohesively. Frontend and backend developers, as well as specialists in different domains, can collaborate effectively through well-defined APIs. This democratization trend will likely continue, allowing more stakeholders to participate meaningfully in the software development process. Expansion of API Marketplaces API marketplaces, where organizations can discover, consume, and contribute APIs, are on the rise. API-First Development aligns with the concept of API marketplaces by emphasizing the importance of well-designed and documented APIs. In the future, we can expect to see an expansion of these marketplaces, fostering a global ecosystem of reusable APIs that accelerate development across industries. Conclusion To summarize, API-First Development represents a paradigm shift in how software is designed and created. Organizations that prioritize APIs as the fundamental building blocks of applications may drive creativity, agility, and interoperability, resulting in robust and scalable software solutions that match the needs of today’s dynamic digital world. The future of software development is inextricably linked to the ongoing progress of API-first concepts. As we embrace microservices, serverless computing, cross-platform development, and the integration of future technologies, API-First Development will be critical in changing the software landscape. The emphasis on security, the collaborative DevOps culture, and the democratization of development all contribute to the long-term usefulness of API-First principles. As API markets grow and provide a varied range of reusable APIs, the development process will become more efficient and collaborative. APIs will play a critical role in the future of software development, allowing for seamless integration, creativity, and adaptation. Adopting rigorous testing procedures assures API stability and adds to the overall success of the API-First strategy, laying the way for a future in which software development is a dynamic and collaborative journey rather than a one-time activity.
As a quick recap, in Part 1:

We built a simple gRPC service for managing topics and messages in a chat service (like a very simple version of Zulip, Slack, or Teams). gRPC provided a very easy way to represent the services and operations of this app.
We were able to serve (a very rudimentary implementation) from localhost on an arbitrary port (9000 by default) over a custom TCP protocol.
We were able to call the methods on these services both via a CLI utility (grpc_cli) as well as through generated clients (via tests).

The advantage of this approach is that any app/site/service can access this running server via a client (we could also generate JS, Swift, or Java clients to make these calls in the respective environments). At a high level, the downsides to this approach are:

Network access: Usually, a network request (from an app or a browser client to this service) has to traverse several networks over the internet. Most networks are secured by firewalls that only permit access to specific ports and protocols (80: http, 443: https), and having this custom port (and protocol) whitelisted on every firewall along the way may not be tractable.
Discomfort with non-standard tools: Familiarity and comfort with gRPC are still nascent outside the service-building community. For most service consumers, few things are easier and more accessible than HTTP-based tools (cURL, HTTPie, Postman, etc.). Similarly, other enterprises/organizations are used to APIs exposed as RESTful endpoints, so having to build/integrate non-HTTP clients imposes a learning curve.

Use a Familiar Cover: gRPC-Gateway

We can have the best of both worlds by placing a proxy in front of our service that translates between gRPC and the familiar REST/HTTP for the outside world. Given the amazing ecosystem of plugins in gRPC, just such a plugin exists: the gRPC-Gateway. The repo itself contains a very in-depth set of examples and tutorials on how to integrate it into a service. In this guide, we shall apply it to our canonical chat service in small increments. A very high-level image (courtesy of gRPC-Gateway) shows the final wrapper architecture around our service.

This approach has several benefits:

Interoperability: Clients that need and only support HTTP(s) can now access our service with a familiar facade.
Network support: Most corporate firewalls and networks rarely allow non-HTTP ports. With the gRPC-Gateway, this limitation can be eased as the services are now exposed via an HTTP proxy without any loss in translation.
Client-side support: Today, several client-side libraries already support and enable REST, HTTP, and WebSocket communication with servers. Using the gRPC-Gateway, these existing tools (e.g., cURL, HTTPie, Postman) can be used as is. Since no custom protocol is exposed beyond the gRPC-Gateway, the complexity of implementing clients for custom protocols is eliminated (e.g., no need to implement a gRPC generator for Kotlin or Swift to support Android or iOS).
Scalability: Standard HTTP load-balancing techniques can be applied by placing a load balancer in front of the gRPC-Gateway to distribute requests across multiple gRPC service hosts. Building a protocol/service-specific load balancer is not an easy or rewarding task.

Overview

You might have already guessed: protoc plugins again come to the rescue.
In our service's Makefile (see Part 1), we generated messages and service stubs for Go using the protoc-gen-go plugin: protoc --go_out=$OUT_DIR --go_opt=paths=source_relative \ --go-grpc_out=$OUT_DIR --go-grpc_opt=paths=source_relative \ --proto_path=$PROTO_DIR \ $PROTO_DIR/onehub/v1/*.proto A Brief Introduction to Plugins The magic of the protoc tool is that it does not perform any generation on its own but orchestrates plugins by passing the parsed Abstract Syntax Tree (AST) across them. This is illustrated below: Step 0: Input files (in the above case, onehub/v1/*.proto) are passed to the protoc tool. Step 1: The protoc tool first parses and validates all proto files. Step 2: protoc then invokes each plugin listed in its command-line arguments in turn, passing a serialized version of the AST of all the proto files it has parsed. Step 3: Each proto plugin (in this case, go and go-grpc) reads this serialized AST via its stdin. The plugin processes/analyzes these AST representations and generates file artifacts. Note that there does not need to be a 1:1 correspondence between input files (e.g., A.proto, B.proto, C.proto) and the output file artifacts it generates. For example, the plugin may create a "single" unified file artifact encompassing all the information in all the input protos. The plugin writes out the generated file artifacts onto its stdout. Step 4: The protoc tool captures the plugin's stdout and, for each generated file artifact, serializes it onto disk. Questions How does protoc know which plugins to invoke? Any command line argument to protoc in the format --<pluginname>_out is a plugin indicator with the name "pluginname". In the above example, protoc would have encountered two plugins: go and go-grpc. Where does protoc find the plugin? protoc uses a convention of finding an executable with the name protoc-gen-<pluginname>. This executable must be found in the folders in the $PATH variable. Since plugins are just plain executables, they can be written in any language. How can I serialize/deserialize the AST? You do not need to handle the wire format of the AST yourself. protoc has libraries (in several languages) that can be included by the plugin executables to deserialize ASTs from stdin and serialize generated file artifacts onto stdout. Setup As you may have guessed (again), our plugins will also need to be installed before they can be invoked by protoc. We shall install the gRPC-Gateway plugins. For a detailed set of instructions, follow the gRPC-Gateway installation setup. Briefly: go get \ github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-grpc-gateway \ github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-openapiv2 \ google.golang.org/protobuf/cmd/protoc-gen-go \ google.golang.org/grpc/cmd/protoc-gen-go-grpc # An explicit install after the get is required go install \ github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-grpc-gateway \ github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-openapiv2 \ google.golang.org/protobuf/cmd/protoc-gen-go \ google.golang.org/grpc/cmd/protoc-gen-go-grpc This will install the following four plugins in your $GOBIN folder: protoc-gen-grpc-gateway - The gRPC Gateway generator protoc-gen-openapiv2 - Swagger/OpenAPI spec generator protoc-gen-go - The Go protobuf message generator protoc-gen-go-grpc - Go gRPC server stub and client generator Make sure that your $GOBIN folder is in your PATH. Add Makefile Targets Assuming you are using the example from Part 1, add an extra target to the Makefile: gwprotos: echo "Generating gRPC Gateway bindings and OpenAPI spec" protoc -I . 
--grpc-gateway_out $(OUT_DIR) \ --grpc-gateway_opt logtostderr=true \ --grpc-gateway_opt paths=source_relative \ --grpc-gateway_opt generate_unbound_methods=true \ --proto_path=$(PROTO_DIR)/onehub/v1/ \ $(PROTO_DIR)/onehub/v1/*.proto Notice how the parameters are similar to the ones in Part 1 (when we were generating Go bindings). For each file X.proto, just like the go and go-grpc plugin, an X.pb.gw.go file is created that contains the HTTP bindings for our service. Customizing the Generated HTTP Bindings In the previous section, .pb.gw.go files were created containing default HTTP bindings of our respective services and methods. This is because we had not provided any URL bindings, HTTP verbs (GET, POST, etc.), or parameter mappings. We shall address that shortcoming now by adding custom HTTP annotations to the service's definition. While all our services have a similar structure, we will look at the Topic service for its HTTP annotations. Topic service with HTTP annotations: syntax = "proto3"; import "google/protobuf/field_mask.proto"; option go_package = "github.com/onehub/protos"; package onehub.v1; import "onehub/v1/models.proto"; import "google/api/annotations.proto"; /** * Service for operating on topics */ service TopicService { /** * Create a new topic */ rpc CreateTopic(CreateTopicRequest) returns (CreateTopicResponse) { option (google.api.http) = { post: "/v1/topics", body: "*", }; } /** * List all topics from a user. */ rpc ListTopics(ListTopicsRequest) returns (ListTopicsResponse) { option (google.api.http) = { get: "/v1/topics" }; } /** * Get a particular topic */ rpc GetTopic(GetTopicRequest) returns (GetTopicResponse) { option (google.api.http) = { get: "/v1/topics/{id=*}" }; } /** * Batch get multiple topics by ID */ rpc GetTopics(GetTopicsRequest) returns (GetTopicsResponse) { option (google.api.http) = { get: "/v1/topics:batchGet" }; } /** * Delete a particular topic */ rpc DeleteTopic(DeleteTopicRequest) returns (DeleteTopicResponse) { option (google.api.http) = { delete: "/v1/topics/{id=*}" }; } /** * Updates specific fields of a topic */ rpc UpdateTopic(UpdateTopicRequest) returns (UpdateTopicResponse) { option (google.api.http) = { patch: "/v1/topics/{topic.id=*}" body: "*" }; } } /** * Topic creation request object */ message CreateTopicRequest { /** * Topic being created */ Topic topic = 1; } /** * Response of a topic creation. */ message CreateTopicResponse { /** * Topic being created */ Topic topic = 1; } /** * A topic search request. For now, only pagination params are provided. */ message ListTopicsRequest { /** * Instead of an offset an abstract "page" key is provided that offers * an opaque "pointer" into some offset in a result set. */ string page_key = 1; /** * Number of results to return. */ int32 page_size = 2; } /** * Response of a topic search/listing. */ message ListTopicsResponse { /** * The list of topics found as part of this response. */ repeated Topic topics = 1; /** * The key/pointer string that subsequent List requests should pass to * continue the pagination. */ string next_page_key = 2; } /** * Request to get a topic. */ message GetTopicRequest { /** * ID of the topic to be fetched */ string id = 1; } /** * Topic get response */ message GetTopicResponse { Topic topic = 1; } /** * Request to batch get topics */ message GetTopicsRequest { /** * IDs of the topics to be fetched */ repeated string ids = 1; } /** * Topic batch-get response */ message GetTopicsResponse { map<string, Topic> topics = 1; } /** * Request to delete a topic. 
*/ message DeleteTopicRequest { /** * ID of the topic to be deleted. */ string id = 1; } /** * Topic deletion response */ message DeleteTopicResponse { } /** * The request for (partially) updating a Topic. */ message UpdateTopicRequest { /** * Topic being updated */ Topic topic = 1; /** * Mask of fields being updated in this Topic to make partial changes. */ google.protobuf.FieldMask update_mask = 2; /** * IDs of users to be added to this topic. */ repeated string add_users = 3; /** * IDs of users to be removed from this topic. */ repeated string remove_users = 4; } /** * The response for (partially) updating a Topic. */ message UpdateTopicResponse { /** * Topic being updated */ Topic topic = 1; } Instead of having "empty" method definitions (e.g., rpc MethodName(ReqType) returns (RespType) {}), we are now seeing "annotations" being added inside methods. Any number of annotations can be added, and each annotation is parsed by protoc and passed to all the plugins invoked by it. There are tons of annotations that can be passed, and this has a "bit of everything" in it. Back to the HTTP bindings: Typically an HTTP annotation has a method, a URL path (with bindings within { and }), and a marking to indicate what the body parameter maps to (for PUT and POST methods). For example, in the CreateTopic method, the method is a POST request to "/v1/topics" with the body (*) corresponding to the JSON representation of the CreateTopicRequest message type; i.e., our request is expected to look like this: { "topic": {... topic object...} } Naturally, the response object of this would be the JSON representation of the CreateTopicResponse message. The other examples in the topic service, as well as in the other services, are reasonably intuitive. Feel free to read through it to get any finer details. Before we are off to the next section implementing the proxy, we need to regenerate the pb.gw.go files to incorporate these new bindings: make all We will now see the following error: google/api/annotations.proto: File not found. topics.proto:8:1: Import "google/api/annotations.proto" was not found or had errors. Unfortunately, there is no "package manager" for protos at present. This void is being filled by an amazing tool: Buf.build (which will be the main topic in Part 3 of this series). In the meantime, we will resolve this by manually copying (shudder) http.proto and annotations.proto. So, our protos folder will have the following structure: protos ├── google │ └── api │ ├── annotations.proto │ └── http.proto └── onehub └── v1 └── topics.proto └── messages.proto └── ... However, we will follow a slightly different structure. Instead of copying files to the protos folder, we will create a vendors folder at the root and symlink to it from the protos folder (this symlinking will be taken care of by our Makefile). Our new folder structure is: onehub ├── Makefile ├── ... ├── vendors │ ├── google │ │ └── api │ │ ├── annotations.proto │ │ └── http.proto ├── protos └── google -> onehub/vendors/google └── onehub └── v1 └── topics.proto └── messages.proto └── ... Our updated Makefile is shown below. 
Makefile for HTTP bindings: # Some vars to determine go locations etc GOROOT=$(shell which go) GOPATH=$(HOME)/go GOBIN=$(GOPATH)/bin # Evaluates the abs path of the directory where this Makefile resides SRC_DIR:=$(shell dirname $(realpath $(firstword $(MAKEFILE_LIST)))) # Where the protos exist PROTO_DIR:=$(SRC_DIR)/protos # where we want to generate server stubs, clients etc OUT_DIR:=$(SRC_DIR)/gen/go all: createdirs printenv goprotos gwprotos openapiv2 cleanvendors goprotos: echo "Generating GO bindings" protoc --go_out=$(OUT_DIR) --go_opt=paths=source_relative \ --go-grpc_out=$(OUT_DIR) --go-grpc_opt=paths=source_relative \ --proto_path=$(PROTO_DIR) \ $(PROTO_DIR)/onehub/v1/*.proto gwprotos: echo "Generating gRPC Gateway bindings and OpenAPI spec" protoc -I . --grpc-gateway_out $(OUT_DIR) \ --grpc-gateway_opt logtostderr=true \ --grpc-gateway_opt paths=source_relative \ --grpc-gateway_opt generate_unbound_methods=true \ --proto_path=$(PROTO_DIR) \ $(PROTO_DIR)/onehub/v1/*.proto openapiv2: echo "Generating OpenAPI specs" protoc -I . --openapiv2_out $(SRC_DIR)/gen/openapiv2 \ --openapiv2_opt logtostderr=true \ --openapiv2_opt generate_unbound_methods=true \ --openapiv2_opt allow_merge=true \ --openapiv2_opt merge_file_name=allservices \ --proto_path=$(PROTO_DIR) \ $(PROTO_DIR)/onehub/v1/*.proto printenv: @echo MAKEFILE_LIST=$(MAKEFILE_LIST) @echo SRC_DIR=$(SRC_DIR) @echo PROTO_DIR=$(PROTO_DIR) @echo OUT_DIR=$(OUT_DIR) @echo GOROOT=$(GOROOT) @echo GOPATH=$(GOPATH) @echo GOBIN=$(GOBIN) createdirs: rm -Rf $(OUT_DIR) mkdir -p $(OUT_DIR) mkdir -p $(SRC_DIR)/gen/openapiv2 cd $(PROTO_DIR) && ( \ if [ ! -d google ]; then ln -s $(SRC_DIR)/vendors/google . ; fi \ ) cleanvendors: rm -f $(PROTO_DIR)/google Now running Make should be error-free and result in the updated bindings in the .pb.gw.go files. Implementing the HTTP Gateway Proxy Lo and behold, we now have a "proxy" (in the .pb.gw.go files) that translates HTTP requests and converts them into gRPC requests. On the return path, gRPC responses are also translated to HTTP responses. What is now needed is a service that runs an HTTP server that continuously facilitates this translation. We have now added a startGatewayServer method in cmd/server.go that also starts an HTTP server to do all this back-and-forth translation: import ( ... // previous imports // new imports "context" "net/http" "github.com/grpc-ecosystem/grpc-gateway/v2/runtime" ) func startGatewayServer(grpc_addr string, gw_addr string) { ctx := context.Background() mux := runtime.NewServeMux() opts := []grpc.DialOption{grpc.WithInsecure()} // Register each server with the mux here if err := v1.RegisterTopicServiceHandlerFromEndpoint(ctx, mux, grpc_addr, opts); err != nil { log.Fatal(err) } if err := v1.RegisterMessageServiceHandlerFromEndpoint(ctx, mux, grpc_addr, opts); err != nil { log.Fatal(err) } http.ListenAndServe(gw_addr, mux) } func main() { flag.Parse() go startGRPCServer(*addr) startGatewayServer(*addr, *gw_addr) } In this implementation, we created a new runtime.ServeMux and registered each of our gRPC services' handlers using the v1.Register<ServiceName>HandlerFromEndpoint method. This method associates all of the URLs found in the <ServiceName> service's protos with this particular mux. Note how all these handlers are associated with the port on which the gRPC service is already running (port 9000 by default). Finally, the HTTP server is started on its own port (8080 by default). 
You might be wondering why we are using the NewServeMux in the github.com/grpc-ecosystem/grpc-gateway/v2/runtime module and not the version in the standard library's net/http module. This is because the grpc-gateway/v2/runtime module's ServeMux is customized to act specifically as a router for the underlying gRPC services it is fronting. It also accepts a list of ServeMuxOption (ServeMux handler) methods that act as middleware for intercepting an HTTP call that is in the process of being converted to a gRPC message sent to the underlying gRPC service. These middleware can be used to set extra metadata needed by the gRPC service in a common and transparent way; a minimal sketch of one such option is shown at the end of this section. We will see more about this in a future post about gRPC interceptors in this demo service. Generating OpenAPI Specs Several API consumers seek OpenAPI specs that describe RESTful endpoints (methods, verbs, body payloads, etc). We can generate an OpenAPI spec file (previously known as Swagger files) that contains information about our service methods along with their HTTP bindings. Add another Makefile target: openapiv2: echo "Generating OpenAPI specs" protoc -I . --openapiv2_out $(SRC_DIR)/gen/openapiv2 \ --openapiv2_opt logtostderr=true \ --openapiv2_opt generate_unbound_methods=true \ --openapiv2_opt allow_merge=true \ --openapiv2_opt merge_file_name=allservices \ --proto_path=$(PROTO_DIR) \ $(PROTO_DIR)/onehub/v1/*.proto Like all other plugins, the openapiv2 plugin also generates one .swagger.json per .proto file. However, this changes the semantics of the Swagger output, as each generated spec is treated as its own "endpoint." Whereas, in our case, what we really want is a single endpoint that fronts all the services. In order to obtain a single "merged" Swagger file, we pass the allow_merge=true parameter to the above command. In addition, we also pass the name of the file to be generated (merge_file_name=allservices). This results in a gen/openapiv2/allservices.swagger.json file that can be read, visualized, and tested with SwaggerUI. Start this new server, and you should see something like this: % onehub % go run cmd/server.go Starting grpc endpoint on :9000: Starting grpc gateway server on: :8080 The additional HTTP gateway is now running on port 8080, which we will query next. 
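As an aside, here is a minimal, hedged sketch (not taken from the original series) of what one such ServeMuxOption could look like. It assumes the cmd/server.go shown earlier plus one extra import, google.golang.org/grpc/metadata; the runtime.WithMetadata option copies the incoming Authorization header into the outgoing gRPC metadata so the backend service (or a future interceptor) can inspect it:

Go
// In startGatewayServer, replace `mux := runtime.NewServeMux()` with a mux that
// carries a metadata-annotating option. Assumes an additional import of
// "google.golang.org/grpc/metadata" alongside the imports shown earlier.
mux := runtime.NewServeMux(
    // WithMetadata runs for every incoming HTTP request before it is translated
    // into a gRPC call; the returned metadata.MD is merged into the metadata of
    // the outgoing gRPC request.
    runtime.WithMetadata(func(ctx context.Context, r *http.Request) metadata.MD {
        return metadata.Pairs("authorization", r.Header.Get("Authorization"))
    }),
)

Because the option is attached to the mux itself, it applies uniformly to every service handler registered on it, which is exactly the common, transparent behavior described above.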
Testing It All Out Now, instead of making grpc_cli calls, we can issue HTTP calls via the ubiquitous curl command (also make sure you install jq for pretty-printing your JSON output): Create a Topic % curl -s -d '{"topic": {"name": "First Topic", "creator_id": "user1"}}' localhost:8080/v1/topics | jq { "topic": { "createdAt": "2023-07-07T20:53:31.629771Z", "updatedAt": "2023-07-07T20:53:31.629771Z", "id": "1", "creatorId": "user1", "name": "First Topic", "users": [] } } And another: % curl -s localhost:8080/v1/topics -d '{"topic": {"name": "Urgent topic", "creator_id": "user2", "users": ["user1", "user2", "user3"]}}' | jq { "topic": { "createdAt": "2023-07-07T20:56:52.567691Z", "updatedAt": "2023-07-07T20:56:52.567691Z", "id": "2", "creatorId": "user2", "name": "Urgent topic", "users": [ "user1", "user2", "user3" ] } } List All Topics % curl -s localhost:8080/v1/topics | jq { "topics": [ { "createdAt": "2023-07-07T20:53:31.629771Z", "updatedAt": "2023-07-07T20:53:31.629771Z", "id": "1", "creatorId": "user1", "name": "First Topic", "users": [] }, { "createdAt": "2023-07-07T20:56:52.567691Z", "updatedAt": "2023-07-07T20:56:52.567691Z", "id": "2", "creatorId": "user2", "name": "Urgent topic", "users": [ "user1", "user2", "user3" ] } ], "nextPageKey": "" } Get Topics by IDs Here, "list" values (e.g., ids) are passed by repeating them as query parameters: % curl -s "localhost:8080/v1/topics?ids=1&ids=2" | jq { "topics": [ { "createdAt": "2023-07-07T20:53:31.629771Z", "updatedAt": "2023-07-07T20:53:31.629771Z", "id": "1", "creatorId": "user1", "name": "First Topic", "users": [] }, { "createdAt": "2023-07-07T20:56:52.567691Z", "updatedAt": "2023-07-07T20:56:52.567691Z", "id": "2", "creatorId": "user2", "name": "Urgent topic", "users": [ "user1", "user2", "user3" ] } ], "nextPageKey": "" } Delete a Topic Followed by a Listing % curl -sX DELETE "localhost:8080/v1/topics/1" | jq {} % curl -s "localhost:8080/v1/topics" | jq { "topics": [ { "createdAt": "2023-07-07T20:56:52.567691Z", "updatedAt": "2023-07-07T20:56:52.567691Z", "id": "2", "creatorId": "user2", "name": "Urgent topic", "users": [ "user1", "user2", "user3" ] } ], "nextPageKey": "" } Best Practices Separation of Gateway and gRPC Endpoints In our example, we served the Gateway and gRPC services on their own addresses. Instead, we could have directly invoked the gRPC service methods, i.e., by directly creating NewTopicService(nil) and invoking methods on those. However, running these two services separately meant we could have other (internal) services directly access the gRPC service instead of going through the Gateway. This separation of concerns also meant these two services could be deployed separately (when on different hosts) instead of needing a full upgrade of the entire stack. HTTPS Instead of HTTP While in this example the startGatewayServer method started a plain HTTP server, it is highly recommended to serve the gateway over HTTPS for security, preventing man-in-the-middle attacks, and protecting clients' data. Use of Authentication This example did not have any authentication built in. However, authentication (authn) and authorization (authz) are very important pillars of any service. The Gateway (and the gRPC service) are no exceptions to this. The use of middleware to handle authn and authz is critical to the gateway. Authentication can be applied with several mechanisms like OAuth2 and JWT to verify users before passing a request to the gRPC service (a minimal sketch of such a gateway-side check follows below). 
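To make the authentication point concrete, here is a small hedged sketch (not part of the original example) of a gateway-side check: a standard net/http middleware wrapped around the gateway mux before it is handed to http.ListenAndServe. The validateToken function is a hypothetical placeholder for whatever OAuth2/JWT verification is actually in use, and the snippet assumes an extra "strings" import in cmd/server.go:

Go
// Hypothetical sketch: reject requests without a valid bearer token before
// they are ever translated into gRPC calls.
func withAuth(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        token := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
        if !validateToken(token) { // validateToken: placeholder for real OAuth2/JWT verification
            http.Error(w, "unauthorized", http.StatusUnauthorized)
            return
        }
        next.ServeHTTP(w, r)
    })
}

// The last line of startGatewayServer would then become:
//   http.ListenAndServe(gw_addr, withAuth(mux))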
Alternatively, the tokens could be passed as metadata to the gRPC service, which can perform the validation before processing the request. The use of middleware in the Gateway (and interceptors in the gRPC service) will be shown in Part 4 of this series. Caching for Improved Performance Caching improves performance by avoiding database (or heavy) lookups of data that may be frequently accessed (and/or not often modified). The Gateway server can also cache responses from the gRPC service (with possible expiration timeouts) to reduce the load on the gRPC server and improve response times for clients. Note: Just like authentication, caching can also be performed at the gRPC server. However, this would not prevent excess calls that may otherwise have been prevented by the gateway service. Using Load Balancers While also applicable to gRPC servers, HTTP load balancers (in front of the Gateway) let us spread requests across multiple gateway instances to improve the scalability and reliability of our services, especially during high-traffic periods. Conclusion By adding a gRPC Gateway to your gRPC services and applying best practices, your services can now be exposed to clients using different platforms and protocols. Adhering to best practices also ensures reliability, security, and high performance. In this article, we have: Seen the benefits of wrapping our services with a Gateway service Added HTTP bindings to an existing set of services Learned the best practices for running Gateway services in front of your gRPC services In the next post, we will take a small detour and introduce a modern tool for managing gRPC plugins and making it easy to work with them.
In today’s digital landscape, the demand for scalable, high-performance databases that can seamlessly integrate with modern application frameworks is ever-growing. While reliable, traditional relational databases often need help keeping pace with the dynamic requirements of cloud-native applications. It has led to the rise of NoSQL databases, offering flexibility, scalability, and performance tailored to the demands of modern applications. This article delves into the synergy between Oracle NoSQL and Quarkus, exploring how their integration empowers Java developers to build robust, cloud-native applications efficiently. Oracle NoSQL is a distributed key-value database designed for real-time, low-latency data processing at scale. It provides a flexible data model, allowing developers to store and retrieve data without the constraints of a fixed schema. Leveraging a distributed architecture, Oracle NoSQL ensures high availability, fault tolerance, and horizontal scalability, making it ideal for handling large volumes of data in cloud environments. With features like automatic sharding, replication, and tunable consistency levels, Oracle NoSQL offers the performance and reliability required for modern applications across various industries. Quarkus is a Kubernetes-native Java framework tailored for GraalVM and OpenJDK HotSpot, optimized for fast startup time and low memory footprint. It embraces the principles of cloud-native development, offering seamless integration with popular containerization platforms and microservices architectures. Quarkus boosts developer productivity with its comprehensive ecosystem of extensions, enabling developers to build, test, and deploy Java applications with unparalleled efficiency. With its reactive programming model, support for imperative and reactive styles, and seamless integration with popular Java libraries, Quarkus empowers developers to create lightweight, scalable, and resilient applications for the cloud age. Why Oracle NoSQL and Quarkus Together Integrating Oracle NoSQL with Quarkus combines both technologies’ strengths, offering Java developers a powerful platform for building cloud-native applications. Here’s why they fit together seamlessly: Performance and Scalability Oracle NoSQL’s distributed architecture and Quarkus’ optimized runtime combine to deliver exceptional performance and scalability. Developers can scale their applications to handle growing workloads while maintaining low-latency response times. Developer Productivity Quarkus’ developer-friendly features, such as live coding, automatic hot reloads, and streamlined dependency management, complement Oracle NoSQL’s ease of use, allowing developers to focus on building innovative features rather than grappling with infrastructure complexities. Cloud-Native Integration Oracle NoSQL and Quarkus are designed for cloud-native environments, making them inherently compatible with modern deployment practices such as containerization, orchestration, and serverless computing. This compatibility ensures seamless integration with popular cloud platforms like AWS, Azure, and Google Cloud. Reactive Programming Quarkus’ support for reactive programming aligns well with the real-time, event-driven nature of Oracle NoSQL applications. Developers can leverage reactive paradigms to build highly responsive, resilient applications that handle asynchronous data streams and complex event processing effortlessly. 
In conclusion, integrating Oracle NoSQL with Quarkus offers Java developers a compelling solution for building high-performance, scalable applications in the cloud age. By leveraging both technologies’ strengths, developers can unlock new possibilities in data management, application performance, and developer productivity, ultimately driving innovation and value creation in the digital era. Executing the Database: Start Oracle NoSQL Database Before diving into the code, we must ensure an Oracle NoSQL instance is running. Docker provides a convenient way to run Oracle NoSQL in a container for local development. Here’s how you can start the Oracle NoSQL instance using Docker: Shell docker run -d --name oracle-instance -p 8080:8080 ghcr.io/oracle/nosql:latest-ce This command will pull the latest Oracle NoSQL Community Edition version from the GitHub Container Registry (ghcr.io) and start it as a Docker container named “oracle-instance” on port 8080. Generating Code Structure With Quarkus Quarkus simplifies the process of generating code with its intuitive UI. Follow these steps to generate the code structure for your Quarkus project: Open the Quarkus code generation tool in your web browser. Configure your project dependencies, extensions, and other settings as needed. Click the “Generate your application” button to download the generated project structure as a zip file. Configuring MicroProfile Config Properties Once your Quarkus project is generated, you must configure the MicroProfile Config properties to connect to the Oracle NoSQL database. Modify the microprofile-config.properties file in your project’s src/main/resources directory to include the database configuration and change the port to avoid conflicts: Properties files # Configure Oracle NoSQL Database jnosql.keyvalue.database=olympus jnosql.document.database=olympus jnosql.oracle.nosql.host=http://localhost:8080 # Change server port to avoid conflict server.port=8181 In this configuration: jnosql.keyvalue.database and jnosql.document.database specify the database names for key-value and document stores, respectively. jnosql.oracle.nosql.host specifies the host URL for connecting to the Oracle NoSQL database instance running locally on port 8080. server.port changes the Quarkus server port to 8181 to avoid conflicts with the Oracle NoSQL database on port 8080. With these configurations in place, your Quarkus application will be ready to connect seamlessly to the Oracle NoSQL database instance. You can now develop your application logic, leveraging the power of Quarkus and Oracle NoSQL to build robust, cloud-native solutions. We’ll need to configure the dependencies appropriately to integrate Eclipse JNoSQL with Oracle NoSQL driver into our Quarkus project. Since Quarkus avoids using reflection solutions, we’ll utilize the lite version of Eclipse JNoSQL, which allows us to generate the necessary source code without requiring the reflection engine at runtime. 
Here’s how you can configure the dependencies in your pom.xml file: XML <dependency> <groupId>org.eclipse.jnosql.databases</groupId> <artifactId>jnosql-oracle-nosql</artifactId> <version>${jnosql.version}</version> <exclusions> <exclusion> <groupId>org.eclipse.jnosql.mapping</groupId> <artifactId>jnosql-mapping-reflection</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.eclipse.jnosql.lite</groupId> <artifactId>mapping-lite-processor</artifactId> <version>${jnosql.version}</version> <scope>provided</scope> </dependency> In this configuration: <dependency> with groupId as org.eclipse.jnosql.databases and artifactId as jnosql-oracle-nosql includes the Oracle NoSQL driver for Eclipse JNoSQL. Inside this dependency, we have an <exclusions> block to exclude the jnosql-mapping-reflection artifact. It is to ensure that the reflection engine is not included in our project, as Quarkus does not utilize reflection solutions. <dependency> with groupId as org.eclipse.jnosql.lite and artifactId as mapping-lite-processor includes the lite version of the JNoSQL mapping processor. We specify <scope> as provided for the lite processor dependency. It means that the lite processor is provided during compilation to generate the necessary source code but is not included in the application’s runtime dependencies. With these dependencies configured, Eclipse JNoSQL will be seamlessly integrated into your Quarkus project, allowing you to leverage the power of Oracle NoSQL while adhering to Quarkus’ principles of avoiding reflection solutions. For getting to know more about Eclipse JNoSQL Lite, visit the Eclipse JNoSQL GitHub Repository. We’ll need to make a few adjustments to migrate the entity and repository from Java SE and Helidon to a Quarkus project. Here’s the modified code for your Beer entity, BeerRepository, and BeerResource classes: Beer Entity Java @Entity public class Beer { @Id public String id; @Column public String style; @Column public String hop; @Column public String malt; @Column public List<String> comments; // Public getters and setters are explicitly included for JNoSQL access } Transitioning from Helidon to Quarkus entails adapting our repository to Quarkus-compatible standards. In Quarkus, the repository can extend the BasicRepository interface, simplifying database interactions to basic operations. Java @Repository public interface BeerRepository extends BasicRepository<Beer, String> { } Our RESTful resource, BeerResource, undergoes minimal modification to align with Quarkus conventions. Here’s a breakdown of annotations and changes made: @Path("/beers"): Establishes the base path for beer-related endpoints @RequestScoped: Specifies the scope of the resource instance to a single HTTP request, ensuring isolation @Produces(MediaType.APPLICATION_JSON): Signals the production of JSON responses @Consumes(MediaType.APPLICATION_JSON): Indicates consumption of JSON requests @Inject: Facilitates dependency injection of BeerRepository, eliminating manual instantiation @Database(DatabaseType.DOCUMENT): Qualifies the database type for JNoSQL interactions, specifying the document-oriented nature of Oracle NoSQL; Qualifiers are pivotal in scenarios with multiple interface implementations, ensuring precise dependency resolution. 
Java @Path("/beers") @RequestScoped @Produces(MediaType.APPLICATION_JSON) @Consumes(MediaType.APPLICATION_JSON) public class BeerResource { @Inject @Database(DatabaseType.DOCUMENT) BeerRepository beerRepository; @GET public List<Beer> getBeers() { return beerRepository.findAll(); } @GET @Path("{id}") public Beer getBeerById(@PathParam("id") String id) { return beerRepository.findById(id) .orElseThrow(() -> new WebApplicationException("Beer not found: " + id, Response.Status.NOT_FOUND)); } @PUT public void insert(Beer beer) { beerRepository.save(beer); } @DELETE @Path("{id}") public void delete(@PathParam("id") String id) { beerRepository.deleteById(id); } } Testing the Beer API After setting up the Quarkus project and integrating Oracle NoSQL with JNoSQL, it’s crucial to thoroughly test the API endpoints to ensure they function as expected. Below are the steps to execute and test the API using curl commands via the terminal: Step 1: Run the Quarkus Project Execute the following command in your terminal to start the Quarkus project in development mode: Shell ./mvnw compile quarkus:dev This command compiles the project and starts the Quarkus development server, allowing you to change your code and see the results in real-time. Step 2: Testing Endpoints With cURL You can use cURL, a command-line tool for making HTTP requests, to interact with the API endpoints. Below are the curl commands to test each endpoint: Get All Beers: Shell curl -X GET http://localhost:8181/beers This command retrieves all beers from the database and returns a JSON response containing the beer data. Get a Specific Beer by ID: Shell curl -X GET http://localhost:8181/beers/<beer_id> Replace <beer_id> with the actual ID of the beer you want to retrieve. This command fetches the beer with the specified ID from the database. Insert a New Beer: Shell curl --location --request PUT 'http://localhost:8181/beers' \ --header 'Content-Type: application/json' \ --data '{"style":"IPA", "hop":"Cascade", "malt":"Pale Ale", "comments":["Great beer!", "Highly recommended."]}' This command inserts a new beer into the database with the provided details (style, hop, malt, comments). Delete a Beer by ID: Shell curl -X DELETE http://localhost:8181/beers/<beer_id> Replace <beer_id> with the actual ID of the beer you want to delete. This command removes the beer with the specified ID from the database. By following these steps and executing the provided cURL commands, you can effectively test the functionality of your Beer API endpoints and ensure that they interact correctly with the Oracle NoSQL database. Conclusion In this article, we explored the seamless integration of Oracle NoSQL with Quarkus using JNoSQL, empowering developers to build robust and scalable applications in the cloud age. We began by understanding the fundamentals of Oracle NoSQL and Quarkus, recognizing their strengths in data management and cloud-native development. By migrating a Beer entity and repository from Java SE and Helidon to a Quarkus project, we demonstrated the simplicity of leveraging JNoSQL to interact with Oracle NoSQL databases. By adhering to Quarkus conventions and utilizing JNoSQL annotations, we ensured smooth integration and maintained data integrity throughout the migration process. Furthermore, we tested the API endpoints using cURL commands, validating the functionality of our Beer API and confirming its seamless interaction with the Oracle NoSQL database. 
For developers looking to delve deeper into the implementation details and explore the source code, the following reference provides a comprehensive source code repository: Quarkus with JNoSQL and Oracle NoSQL Source Code Reference By leveraging the capabilities of Quarkus, JNoSQL, and Oracle NoSQL, developers can unlock new possibilities in application development, enabling them to build high-performance, cloud-native solutions easily. In conclusion, integrating Oracle NoSQL with Quarkus empowers developers to embrace the cloud age, delivering innovative and scalable applications that meet the evolving demands of modern businesses.
Here's how to use AI and API Logic Server to create complete running systems in minutes: Use ChatGPT for Schema Automation: create a database schema from natural language. Use Open Source API Logic Server: create working software with one command. App Automation: a multi-page, multi-table admin app. API Automation: a JSON:API with CRUD for each table, with filtering, sorting, optimistic locking, and pagination. Customize the project with your IDE: Logic Automation using rules: declare spreadsheet-like rules in Python for multi-table derivations and constraints - 40X more concise than code. Use Python and standard libraries (Flask, SQLAlchemy) and debug in your IDE. Iterate your project: Revise your database design and logic. Integrate with B2B partners and internal systems. This process leverages your existing IT infrastructure: your IDE, GitHub, the cloud, your database… open source. Let's see how. 1. AI: Schema Automation You can use an existing database or create a new one with ChatGPT or your database tools. Use ChatGPT to generate SQL commands for database creation: Plain Text Create a sqlite database for customers, orders, items and product Hints: use autonum keys, allow nulls, Decimal types, foreign keys, no check constraints. Include a notes field for orders. Create a few rows of only customer and product data. Enforce the Check Credit requirement: Customer.Balance <= CreditLimit Customer.Balance = Sum(Order.AmountTotal where date shipped is null) Order.AmountTotal = Sum(Items.Amount) Items.Amount = Quantity * UnitPrice Store the Items.UnitPrice as a copy from Product.UnitPrice Note the hint above. As we've heard, "AI requires adult supervision." The hint was required to get the desired SQL. This creates standard SQL like this. Copy the generated SQL commands into a file, say, sample_ai.sql: Then, create the database: sqlite3 sample_ai.sqlite < sample_ai.sql 2. API Logic Server: Create Given a database (whether or not it's created from AI), API Logic Server creates an executable, customizable project with the following single command: $ ApiLogicServer create --project_name=sample_ai --db_url=sqlite:///sample_ai.sqlite This creates a project you can open with your IDE, such as VSCode (see below). The project is now ready to run; press F5. It reflects the automation provided by the create command: API Automation: a self-serve API ready for UI developers; and App Automation: an Admin app ready for Back Office Data Maintenance and Business User Collaboration. Let's explore the App and API Automation from the create command. App Automation App Automation means that ApiLogicServer create creates a multi-page, multi-table Admin App automatically. This does not consist of hundreds of lines of complex HTML and JavaScript; it's a simple YAML file that's easy to customize. Ready for business user collaboration, back-office data maintenance... in minutes. API Automation API Automation means that ApiLogicServer create creates a JSON:API automatically. Your API provides an endpoint for each table, with related data access, pagination, optimistic locking, filtering, and sorting. It would take days to months to create such an API using frameworks. UI App Developers can use the API to create custom apps immediately, using Swagger to design their API call and copying the URI into their JavaScript code. APIs are thus self-serve: no server coding is required. Custom App Dev is unblocked: Day 1. 3. Customize So, we have working software in minutes. 
It's running, but we really can't deploy it until we have logic and security, which brings us to customization. Projects are designed for customization, using standards: Python, frameworks (e.g., Flask, SQLAlchemy), and your IDE for code editing and debugging. Not only Python code but also Rules. Logic Automation Logic Automation means that you can declare spreadsheet-like rules using Python. Such logic maintains database integrity with multi-table derivations, constraints, and security. Rules are 40X more concise than traditional code and can be extended with Python. Rules are an executable design. Use your IDE (code completion, etc.) to replace 280 lines of code with the five spreadsheet-like rules below. Note they map exactly to our natural language design: 1. Debugging The screenshot above shows our logic declarations and how we debug them: Execution is paused at a breakpoint in the debugger, where we can examine the state and execute step by step. Note the logging for inserting an Item. Each line represents a rule firing and shows the complete state of the row. 2. Chaining: Multi-Table Transaction Automation Note that it's a Multi-Table Transaction, as indicated by the log indentation. This is because, like a spreadsheet, rules automatically chain, including across tables. 3. 40X More Concise The five spreadsheet-like rules represent the same logic as 200 lines of code, shown here. That's a remarkable 40X decrease in the backend half of the system. 4. Automatic Re-use The logic above, perhaps conceived for Place order, applies automatically to all transactions: deleting an order, changing items, moving an order to a new customer, etc. This reduces code and promotes quality (no missed corner cases). 5. Automatic Optimizations SQL overhead is minimized by pruning, and by eliminating expensive aggregate queries. These can result in orders of magnitude impact. This is because the rule engine is not based on a Rete algorithm but is highly optimized for transaction processing and integrated with the SQLAlchemy ORM (Object Relational Manager). 6. Transparent Rules are an executable design. Note they map exactly to our natural language design (shown in comments) readable by business users. This complements running screens to facilitate agile collaboration. Security Automation Security Automation means you activate login-access security and declare grants (using Python) to control row access for user roles. Here, we filter less active accounts for users with the sales role: Grant( on_entity = models.Customer, to_role = Roles.sales, filter = lambda : models.Customer.CreditLimit > 3000, filter_debug = "CreditLimit > 3000") 4. Iterate: Rules + Python So, we have completed our one-day project. The working screens and rules facilitate agile collaboration, which leads to agile iterations. Automation helps here, too: not only are spreadsheet-like rules 40X more concise, but they meaningfully simplify iterations and maintenance. Let’s explore this with two changes: Requirement 1: Green Discounts Plain Text Give a 10% discount for carbon-neutral products for 10 items or more. Requirement 2: Application Integration Plain Text Send new Orders to Shipping using a Kafka message. Enable B2B partners to place orders with a custom API. Revise Data Model In this example, a schema change was required to add the Product.CarbonNeutral column. This affects the ORM models, the API, etc. So, we want these updated but retain our customizations. 
This is supported using the ApiLogicServer rebuild-from-database command to update existing projects to a revised schema, preserving customizations. Iterate Logic: Add Python Here is our revised logic to apply the discount and send the Kafka message: Extend API We can also extend our API for our new B2BOrder endpoint using standard Python and Flask: Note: Kafka is not activated in this example. To explore a running Tutorial for application integration with running Kafka, click here. Notes on Iteration This illustrates some significant aspects of how logic supports iteration. Maintenance Automation Along with perhaps documentation, one of the tasks programmers most loathe is maintenance. That’s because it’s not about writing code, but archaeology; deciphering code someone else wrote, just so you can add four or five lines that’ll hopefully be called and function correctly. Logic Automation changes that with Maintenance Automation, which means: Rules automatically order their execution (and optimizations) based on system-discovered dependencies. Rules are automatically reused for all relevant transactions. So, to alter logic, you just “drop a new rule in the bucket,” and the system will ensure it’s called in the proper order and re-used over all the relevant Use Cases. Extensibility: With Python In the first case, we needed to do some if/else testing, and it was more convenient to add a dash of Python. While this is pretty simple Python as a 4GL, you have the full power of object-oriented Python and its many libraries. For example, our extended API leverages Flask and open-source libraries for Kafka messages. Rebuild: Logic Preserved Recall we were able to iterate the schema and use the ApiLogicServer rebuild-from-database command. This updates the existing project, preserving customizations. 5. Deploy API Logic Server provides scripts to create Docker images from your project. You can deploy these to the cloud or your local server. For more information, see here. Summary In minutes, you've used ChatGPT and API Logic Server to convert an idea into working software. It required only five rules and a few dozen lines of Python. The process is simple: Create the Schema with ChatGPT. Create the Project with ApiLogicServer. A Self-Serve API to unblock UI Developers: Day 1 An Admin App for Business User Collaboration: Day 1 Customize the project. With Rules: 40X more concise than code. With Python: for complete flexibility. Iterate the project in your IDE to implement new requirements. Prior customizations are preserved. It all works with standard tooling: Python, your IDE, and container-based deployment. You can execute the steps in this article with the detailed tutorial: click here.
Have you ever wondered what gives the cloud an edge over legacy technologies? When answering that question, the obvious but often overlooked aspect is the seamless integration of disparate systems, applications, and data sources. That's where Integration Platform as a Service (iPaaS) comes in. In today's complex IT landscape, your organization is faced with a myriad of applications, systems, and data sources, both on-premises and in the cloud. This means you face the challenge of connecting these disparate elements to enable seamless communication and data exchange. By providing a unified platform for integration, iPaaS enables you to break down data silos, automate workflows, and unlock the full potential of your digital assets. Because of this, iPaaS is the unsung hero of modern enterprises. It can play a pivotal role in your digital transformation journey by streamlining and automating workflows. iPaaS also enables you to modernize legacy systems, enhance productivity, and create better experiences for your customers, users, and employees. Let's explore some key tenets of how iPaaS accelerates digital transformation: Rapid integration building: iPaaS reduces integration building time, allowing you to save resources and focus on other strategic initiatives. iPaaS provides a catalog of pre-built connectors for various applications that accelerate integration and eliminate the need for custom coding to connect to a new application, service, or system. It also commonly offers a simple drag-and-drop user interface to ease the process of building the connections. Often, the user can start with a reusable template, which cuts down on development time. iPaaS can enhance the developer experience by providing robust API management tools, documentation, and testing environments. This promotes faster development and more reliable integrations. API management: iPaaS facilitates API management across the entire API lifecycle — from designing to publishing, documenting, analyzing, and beyond — helping you access data faster with the necessary governance and control. iPaaS acts as a centralized hub for managing and monitoring APIs. iPaaS platforms offer robust security features like authentication, authorization, and encryption to protect sensitive data during API interactions. They also facilitate automated workflows for triggering API calls, handling data transformations, and responding to events. Modernizing legacy systems: The difficulty of connecting your on-premises environment to newer SaaS applications can significantly hinder the modernization process. iPaaS allows you to easily integrate cloud-based technologies with your legacy systems, giving you the best of both worlds and enabling a smooth transition to modern processes and technologies. iPaaS helps virtualize the entire environment, making it easy to replace or modernize your applications, irrespective of where they reside. Automation and efficiency: iPaaS helps automate repetitive, complex processes and reduce manual touchpoints, ultimately improving operational efficiency and providing better customer experiences. For example, you can define a trigger in your workflow, and your functions will be automatically executed once the trigger is activated. The more you reduce human intervention, the more consistent your results become. Enabling agile operations: iPaaS enables you to rapidly integrate new applications and services at your organization as and when required, allowing you to remain agile and flexible in a quickly digitizing business market. 
Enhanced productivity with generative AI (Gen AI): Modern iPaaS solutions offer advanced Gen AI capabilities for rapid prototyping, error resolution, and FinOps optimization, helping you become more data-driven. It provides recommendations based on history, which makes it easier for a citizen integrator to get started without depending on the experts. Scalability and performance: One of the biggest reasons to use an integration platform on the cloud is its ability to scale up and down almost instantaneously to accommodate unpredictable workloads. Depending on the configuration you choose, you can ensure that performance does not dip even when the workload drastically increases. iPaaS enables you to scale your cloud systems seamlessly, supporting growing data volumes, increasing transaction volumes, and evolving business processes. Security and compliance: Last but not least, iPaaS helps you implement stringent security standards — including data encryption, access controls, and compliance certifications — to ensure the confidentiality, integrity, and availability of sensitive information. iPaaS as a Catalyst for Digital Transformation iPaaS is not just a technology solution; it's a strategic enabler of digital transformation as it empowers organizations to adapt, innovate, and thrive in the digital age. In that way, it acts as a catalyst for digital transformation. By embracing iPaaS, you can break down barriers, enhance collaboration, and create a connected ecosystem that drives growth and customer satisfaction.
This article is the first in a series of great takeaways on how to craft a well-designed REST API. As you read through the articles, you will learn how to form an API in a way that is easy to expand, document, and use. The articles will not cover implementation details (e.g., no code samples). Still, any suggestions given here will be possible to implement in any proper framework like Spring Boot, .NET Core MVC, NestJS, and others. Also, this series will only cover JSON as the envelope for data. REST APIs can use any data format they like, but other formats are outside the scope of this series. The things you can expect to get from these articles are related to REST APIs with JSON as the data format and will cover these subjects: Naming conventions (This article) Recognizable design patterns Idempotency (Future article) Paging and sorting (Future article) Searching (Future article) Patching (Future article) Versioning (Future article) Authentication (Future article) Documentation (Future article) That's the overview. Let's get started looking at the naming conventions, starting with Name Casing Conventions. Name Casing Conventions First, we will cover the use of casing when designing your REST API. As you probably know, there are quite a few common casing conventions around, some of which are: PascalCase camelCase snake_case More than these exist, but these are the ones most common in REST APIs. For example, you will see that Stripe’s API uses snake_case, some of Amazon’s APIs use PascalCase, and Previsto’s API uses camelCase. As you traverse the many fine REST APIs out there to find inspiration, you will see that there is no de facto standard for which naming convention to use. However, I must emphasize that camelCase does offer some benefits for APIs that are meant to be used directly in a browser application (or other JavaScript clients) because it is the same casing that is standard in JavaScript. That means if the REST API uses camelCase in its JSON, then when that JSON is parsed by JavaScript, the object fields will have the casing that fits. Example: Imagine the following is the data received from the server. JSON { "id": "anim-j95dcnjf3fjcde8nv", "type": "Cat", "name": "Garfield", "needsFood": true } Then in the client code, using this can be as simple as: JavaScript fetch('https://myshop.com/animals/anim-j95dcnjf3fjcde8nv') .then(response => response.json()) .then(animal => console.log(`${animal.name} needs food: ${animal.needsFood}`)); Using camelCase ensures that the fields on the object do not need to be translated into another case when parsed in JavaScript. They will immediately be in the correct case when parsed to JavaScript objects. That said, converting the case when parsing the JSON is also possible and often done if the REST API uses another case, but it is just more work on the client side. Plural or Singular Another question that often arises is whether to use plural or singular naming for the resources in the URLs, e.g., animals or animal (reflected in the URL as either https://myshop.com/animals or https://myshop.com/animal). It may look like a superfluous thing to consider, but in reality, making the right decision makes the API simpler to navigate. Why Some Prefer Singular Naming Some prefer the singular model here. They may argue that the entity is a class called Animal in the backend, which is singular. Therefore, they say, the resource should also use singular naming. The name of the resource thereby defines the type of data in the resource. 
That sounds like a legit reason, but it is not. Why Plural Naming Is Almost Always Correct According to the definition of REST on Wikipedia, a resource "encapsulates entities (e.g. files)". So the name of the resource is not the data type. It is the name of the container of entities - which usually are of the same type. Imagine for a second how you would write a piece of code that defines a "container" of entities. Java // Using plural would be the correct naming var animals = new ArrayList<Animal>(); animals.add(new Animal()); // Using singular would not var animal = new ArrayList<Animal>(); animal.add(new Animal()); You can look at the resource the same way. It is a container of entities just like Arrays, Lists, etc., and should be named as such. It will also ensure that the implementation in the client can reflect the resource names in its integration with the API. Consider this simplified integration as an example: Java class MyShopClient { animals() { return ...; } } var myshop = new MyShopClient(); myshop.animals().findAll(); // Request to https://myshop.com/animals Notice that the naming naturally reflects the naming of the resources on the server. Recognizable patterns like this make it easy for developers to figure out how to navigate the API. Recognizability Ensuring that naming and patterns are easy to recognize should be considered a quality of the API implementation. For example, using different name casing conventions or unnatural naming of resources makes it harder for the user of your API to figure out how to use it. But there is more you can do to ensure the quality of your API. Follow along in this series as we cover multiple aspects of how to craft a REST API that is easy to expand, document, and use. In the next article, we will go into depth on how to ensure Recognizable design patterns in your API.
John Vester
Staff Engineer,
Marqeta
Alexey Shepelev
Senior Full-stack Developer,
BetterUp
Saurabh Dashora
Founder,
ProgressiveCoder