Microservices Interview Questions- Part 4

Thinking of moving into a backend or cloud-focused role? Understanding microservices is key. Today’s tech companies are looking for developers who can build and manage distributed systems using modern practices. Microservices break down large applications into smaller parts that can run independently, making it easier to scale and maintain.

But explaining how this works in an interview can be tricky, especially if you’re switching from a non-tech or front-end role. This guide offers a curated list of microservices interview questions and answers to help you build confidence.

It covers real-world examples, common challenges, and tools like Docker, Kubernetes, and service registries. Whether you’re entering DevOps, cloud engineering, or backend development, this resource will help you speak clearly about microservices in interviews. With the right preparation, you’ll be able to show employers that you’re ready to take on complex system design problems with ease.

Question: What are the advantages of Microservices over a Monolithic architecture?

Answer:

  1. Microservices-based architectures allow continuous deployment & delivery. Being smaller in size, Microservices can be quickly built & delivered. It is a great fit for Agile development, as businesses don’t have to wait long to see the product. In a Microservice architecture, the individual services can be built in different languages, like Scala or Java. Different Microservices can even run on different versions of the same language, like Java 8 & 9. Microservices are independent; thus, a Microservice can be scaled up & down independently of the other Microservices.
  2. Microservices offer several tangible benefits over a Monolithic architecture, including increased flexibility, scalability, & agility, among other significant advantages.
    • Independent components. All services can be independently deployed & updated, which gives greater flexibility. A bug in a Microservice impacts only that particular service & does not influence the whole application. Also, it is easier to add new features to a Microservice app than to a Monolithic app.
    • Easier understanding. Split into simpler & smaller components, a Microservice application is easier to manage & understand. You can focus on just the particular service related to your business goal.
    • Better scalability. Each element in a Microservice architecture can be scaled independently. Thus, the entire process is more cost-effective & time-effective than with a Monolith. Also, every Monolith app has limits in terms of scalability, so the more users you have, the more problems you will encounter. That is why several companies end up rebuilding their Monolithic architecture.

Question: How do you monitor multiple Microservices, & which tools can you use?

Answer:

  1. Spring Boot Actuator is an effective tool to monitor metrics & counters for an individual Microservice. However, if we have multiple Microservices, it is difficult to monitor them individually. We can utilize open-source tools like Grafana, Prometheus, or Kibana to monitor multiple Microservices. Prometheus is a pull-based monitoring tool; it collects metrics at configured intervals, displays them, & also triggers alerts. Grafana & Kibana are dashboard tools used for visualizing & monitoring data. When there is a large number of Microservices with dependencies, we can use Dynatrace, AppDynamics, & New Relic, which will map the dependencies amongst Microservices.
  2. The Spring Boot Actuator is a potent tool for monitoring statistics for individual Microservices. However, it is hard to track Microservices independently when there are many of them. We could use open-source tools like Kibana, Prometheus, or Grafana for that purpose. Prometheus is a pull-based monitoring tool that collects metrics, displays them, & triggers notifications at configured intervals. Kibana & Grafana are dashboard instruments for monitoring & data visualization. As there are several dependencies between Microservices, Dynatrace, AppDynamics, & New Relic can be applied, which map the dependencies between Microservices.

Question: What is Service-Oriented Architecture (SOA)?

Answer:

  1. SOA or Service-Oriented Architecture refers to a software design that comprises software components providing services to meet business process requirements & software users’ requirements.
  2. Service-Oriented Architecture abbreviated as SOA is a way to make software components reusable through service interfaces. The interfaces use common communication standards in a way that they can be incorporated rapidly into new applications without performing deep integration each time.
  3. Service-Oriented Architecture or SOA is a software design style where services are provided to other components by the application components via a communication protocol over the network. Its principles are independent of vendors & other technologies. In a service-oriented architecture, a number of services communicate with each other in the following two ways:
    • By passing data, or;
    • By two or more services coordinating an activity.

Question: What is service discovery, & what are the different service discovery patterns?

Answer:

  1. Service discovery means finding a service provider’s network location. There are two service discovery patterns, as follows:
    • Client-side Discovery Pattern: In this, clients are responsible for determining the network locations of the available service instances & load-balancing requests across them. The client queries the service registry, a database of available service instances. The client then uses a load-balancing algorithm to choose one of the available service instances & makes a request.
    • Server-side Discovery Pattern: In this, clients send requests to a load balancer; the load balancer, in turn, queries the service registry to determine which provider’s location to send the request to.
  2. There are mainly two service discovery models as mentioned below:
    Client-Side Discovery- With the client-side discovery pattern, the client or API gateway making requests is responsible for identifying the service instance’s location & routing requests to it. The client first queries the service registry to identify the locations of the available service instances; then, it determines which instance to use. The client can choose a simple round-robin load-balancing approach or use a weighting system to decide the best instance to query at a particular time.
    Server-Side Discovery- With server-side discovery, the client or API gateway passes a request, such as a DNS name, to the router. The router queries the service registry to identify the service’s available instances & then applies load-balancing logic to determine which one to use.
    As with third-party registration, some deployment environments include this functionality as part of their service offering; it saves the effort of setting up & maintaining additional components. In such cases, the client or API gateway only needs to know the router’s location.
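As a rough illustration of the client-side pattern described above, here is a minimal, language-agnostic sketch in Python. The registry, service name, & addresses are all made up; a real system would use something like Eureka or Consul:

```python
# Hypothetical in-memory service registry; real systems use Eureka, Consul, etc.
class ServiceRegistry:
    def __init__(self):
        self._instances = {}

    def register(self, service_name, address):
        self._instances.setdefault(service_name, []).append(address)

    def lookup(self, service_name):
        return list(self._instances.get(service_name, []))

# Client-side discovery: the client queries the registry & load-balances itself.
class RoundRobinClient:
    def __init__(self, registry):
        self._registry = registry
        self._counters = {}

    def choose(self, service_name):
        instances = self._registry.lookup(service_name)
        if not instances:
            raise LookupError(f"no instances of {service_name}")
        i = self._counters.get(service_name, 0)
        self._counters[service_name] = i + 1
        return instances[i % len(instances)]  # simple round-robin

registry = ServiceRegistry()
registry.register("orders", "10.0.0.1:8080")
registry.register("orders", "10.0.0.2:8080")
client = RoundRobinClient(registry)
print(client.choose("orders"))  # '10.0.0.1:8080'; subsequent calls alternate
```

In the server-side variant, the same lookup & load-balancing logic would live in a router or load balancer instead of the client.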

Question: What is CQRS?

Answer:

  1. CQRS refers to Command & Query Responsibility Segregation. It is a pattern that segregates the operations that read data (queries) from the operations that update data (commands) through separate interfaces. It means the data models used for querying & for updates are different. CQRS thus splits what was previously a single object into two, separating the methods based on whether they are a command or a query.
  2. CQRS stands for (Command Query Responsibility Segregation), a pattern first described by Greg Young. It enables you to use different models to update information & read information. As the name suggests, we split an application into two parts: command-side & query-side.
    • Commands can change an object’s or entity’s state; they are also known as modifiers or mutators.
    • Queries return an entity’s state & don’t change anything. They are also called accessors.
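The command-side/query-side split above can be sketched as a toy example. The account & model names are invented, & a full CQRS implementation would typically keep the two models in separate data stores:

```python
# Query side: accessors only; serves reads from a separately maintained view.
class AccountReadModel:
    def __init__(self):
        self._view = {}

    def apply(self, account, balance):   # projection update from the write side
        self._view[account] = balance

    def balance_of(self, account):       # query: returns state, no side effects
        return self._view.get(account, 0)

# Command side: mutators; handles commands that change state.
class AccountWriteModel:
    def __init__(self, read_model):
        self._balances = {}
        self._read_model = read_model

    def deposit(self, account, amount):  # command: changes state
        self._balances[account] = self._balances.get(account, 0) + amount
        self._read_model.apply(account, self._balances[account])

read_model = AccountReadModel()
write_model = AccountWriteModel(read_model)
write_model.deposit("alice", 100)
write_model.deposit("alice", 50)
print(read_model.balance_of("alice"))  # 150
```

Note how the read model exposes no mutators & the write model exposes no accessors: the responsibility is fully segregated.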

Question: What is Domain-Driven Design (DDD)?

Answer:

  1. DDD or Domain-Driven Design is a software design technique for understanding & solving complexity. It provides an avenue for facilitating highly cohesive system development through bounded contexts. Microservices architecture encourages focusing service boundaries on the business domain boundaries. DDD & Microservices are used together for a service-oriented design & to reap the benefits of continuous delivery & integration.
  2. Domain-driven design focuses more on the core domain logic. It helps to identify complex designs in domain models. It involves constant collaboration with domain experts to improve the model & resolve domain-centric issues.

Question: What is cross-browser testing?

Answer:

  1. Cross-browser testing is the testing of a website or application in multiple browsers to ensure it works consistently, without any dependencies or compromise in quality. Cross-browser testing applies to both web & mobile applications.
  2. Cross-browser testing is the practice of ensuring that the websites & applications you create work consistently across an acceptable number of web browsers. As a web developer, you are responsible for ensuring that your projects work for all your users, regardless of the device, browser, or assistive tools they’re using. Think about:
    • Different browsers than the ones you regularly use, including some older browsers that people might still be using, which don’t support all the latest JavaScript & CSS features.
    • Different devices with unique capabilities, from the latest smartphones, tablets, & smart TVs to cheap tablets & older feature phones that run browsers with limited capabilities.
  3. Cross-browser testing refers to a method of quality assurance for websites & applications across multiple browsers. In simple terms, it ensures your website’s quality on different screens. It is implemented to verify a website’s design & functionality by testing it on a wide range of devices & operating systems used in the market. Since screen size, resolution, browser version, & OS version all affect how someone views your website or application, cross-browser testing is indispensable for understanding different user experiences.

Question: What is component testing?

Answer:

  1. Component testing, also known as program or module testing, is done after unit testing. In component testing, objects are independently tested as a component without integrating with the other components, e.g., classes, modules, programs, & objects. The development team performs this testing.
  2. Component testing is a software testing type wherein the testing is performed separately on every individual component without integrating with other components. It is also called Module Testing when viewed from an architecture perspective.
  3. Component testing refers to a method where each component of an application is tested separately. Component testing is also called program or module testing. It identifies defects in the module & verifies the software’s functioning. Component testing may be done in isolation from the rest of the system, depending on the development life cycle model chosen for the particular application. In such cases, the missing software is replaced by stubs & drivers, which simulate the interface between the software components in a simple way.
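A minimal sketch of the idea, assuming a hypothetical PriceService component whose dependency on a tax service is replaced by a stub so the component can be tested in isolation:

```python
# Component under test: depends on an external tax service via a client object.
class PriceService:
    def __init__(self, tax_client):
        self._tax = tax_client

    def total(self, net):
        # gross price = net price plus the current tax rate
        return round(net * (1 + self._tax.rate()), 2)

# Stub standing in for the real (possibly unfinished) tax service component.
class StubTaxClient:
    def rate(self):
        return 0.20  # fixed, predictable response for the test

def test_total_applies_tax():
    service = PriceService(StubTaxClient())
    assert service.total(100.0) == 120.0

test_total_applies_tax()
print("component test passed")
```

Because the stub's behavior is fixed, the test exercises only the PriceService logic, not the dependency.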

Question: What are the disadvantages of Microservices?

Answer:

  1. Following are the cons of Microservices:
    • It is difficult to achieve strong consistency across the services.
    • ACID transactions do not span multiple processes.
    • It is difficult to trace & debug issues in a distributed system.
    • There can be cultural differences across teams, such as Dev & Ops having to work together in the same team.
    • Developers need to put additional effort into implementing the communication mechanism between the services.
    • It is tough to handle use cases that span more than one service without using distributed transactions; such cases also require communication & cooperation between different teams.
    • The architecture results in increased memory consumption.
    • Partitioning an application into Microservices is more of an art than a science & is difficult to get right.
  2. As Microservices heavily depend on messaging, one can face certain problems. Communication can be difficult without using automation & advanced agile methodologies. You need to introduce DevOps tools like APM tools, CI/CD servers, & configuration management platforms to manage the network. It is best suited for companies who already use those methods. However, the adoption of additional requirements can be challenging for smaller companies.

Question: How can Zuul be used with other Netflix services?

Answer:

  1. Zuul can be integrated with other Netflix services such as Hystrix. It enables tolerance of several types of faults commonly encountered in Eureka-based service discovery. By tolerating different faults, it makes service discovery easier within Microservices. One can use it to manage routing tables & effective load balancing across the system.
  2. Zuul can be easily integrated with other Netflix services like Hystrix. It is meant for tolerating the many types of faults commonly seen with Eureka. By tolerating various faults, it makes service discovery easier within the realm of Microservices. One can use it to manage routing tables & load balancing across the system.

Question: How does Docker help in a Microservices architecture?

Answer:

  1. Docker helps in many ways for Microservices architecture:
    • In a Microservice architecture, different services may be written in different languages. Thus, a developer would need to set up each service with its platform requirements & dependencies, which becomes difficult with the growing number of services in the ecosystem. However, with services running inside Docker containers, this becomes very easy.
    • Services running inside a Docker container provide a similar setup across all the environments, including stage, production, & development.
    • Docker helps in scaling with container orchestration.
    • Docker facilitates upgrading the underlying language, which saves time & effort.
    • Docker can help to onboard engineers faster.
    • Docker reduces dependencies on the IT Teams for setting up & managing the different environments as required.
  2. With Docker, you can build applications independent of the host environment. As you have a Microservices architecture, you can encapsulate each service in a Docker container. Being lightweight, resource-isolated environments, Docker containers help you build, ship, maintain, & deploy your application.
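For illustration only, a minimal Dockerfile for packaging a Spring Boot Microservice might look like this; the base image, jar name, & port are assumptions, not prescriptions:

```dockerfile
# Hypothetical Dockerfile for a Spring Boot Microservice.
FROM eclipse-temurin:17-jre
WORKDIR /app
# The jar name is illustrative; it comes from your build (e.g. mvn package).
COPY target/orders-service.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Because the container bundles the runtime & dependencies, the same image runs identically across development, stage, & production environments.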

Question: What are some important Spring Cloud annotations?

Answer:

  1. There are different Spring Cloud annotations & configurations as follows:
    • The @EnableEurekaServer annotation makes an application a Eureka registry server to which Microservices can register themselves.
    • @EnableDiscoveryClient annotation allows you to query the Discovery server to find Microservices.
    • Spring provides an intelligent RestTemplate for service discovery & load balancing when you use the @LoadBalanced annotation with the RestTemplate instance.
  2. Spring Boot annotations are a form of metadata that provides data about a program. Annotations give additional information about the program; they are not part of the application you develop & do not directly affect the operation of the code they annotate or change the compiled program’s action.
  3. Here are important Spring Cloud annotations:
    • @EnableConfigServer- This annotation turns an application into a configuration server, which other applications can use to get their configuration. It helps in developing Microservices in Java, where you can have a dedicated Java service for configuration.
    • @EnableEurekaServer- This annotation makes an application a Eureka discovery service, which other applications can utilize to locate services. It is yet another important step in developing Microservices in Java using Spring Cloud.
    • @EnableDiscoveryClient- This Spring Cloud annotation enables an application to register in the service discovery & finds other services through it.
    • @EnableCircuitBreaker- It configures Hystrix circuit breaker protocols. Using the Circuit Breaker pattern, you can enable a Microservice to continue to operate even when a related service fails, thus, preventing cascading failure & giving the failing service time to recover.
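The circuit-breaker behavior that @EnableCircuitBreaker configures via Hystrix can be sketched in a language-agnostic way. The thresholds, timeouts, & service names below are illustrative only, not Hystrix's actual defaults:

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: after enough failures, fail fast via a fallback."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                return fallback()      # open: fail fast, let the service recover
            self.opened_at = None      # half-open: allow one trial call through
        try:
            result = func()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            return fallback()
        self.failures = 0              # success closes the circuit again
        return result

breaker = CircuitBreaker(failure_threshold=2)

def flaky():
    raise ConnectionError("service down")

for _ in range(3):
    print(breaker.call(flaky, lambda: "cached response"))  # fallback each time
```

The fallback plays the role of a Hystrix fallback method: callers keep getting a degraded answer instead of a cascading failure.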

Question: What are the main features of an API Gateway?

Answer:

  1. Some of the main features of API Gateway are:
    • Path Rewriting
    • Hystrix Circuit Breaker addition for resiliency
    • Spring Cloud Discovery & Client integration
    • Request Rate Limiting
  2. Amazon API Gateway has several features, such as:
    • It supports both stateful (WebSocket) & stateless (HTTP & REST) APIs.
    • Potent, flexible authentication mechanisms, such as AWS Identity & Access Management policies, Amazon Cognito user pools, & Lambda authorizer functions.
    • Canary release deployments for safely rolling out changes.
    • CloudTrail logging & monitoring of API usage & API changes.
    • CloudWatch access & execution logging, including the ability to set alarms.
    • The ability to use AWS CloudFormation templates to facilitate API creation.
    • Support for custom domain names.
    • Integration with AWS WAF to protect APIs against common web exploits.
    • Integration with AWS X-Ray for understanding & triaging performance latencies.
  3. API Gateway has various integration features as given below:
    • Identity management: API Gateway integrates with existing third-party Identity Management (IM) infrastructures to perform authorization & authentication of message traffic. API Gateway interoperates with leading integration platforms & products like Microsoft .NET, IBM WebSphere, Oracle WebLogic, & SAP NetWeaver.
    • Scalability: API Gateway offers a highly scalable & flexible solution architecture. Administrators can add new API Gateway instances as needed & deploy the same or different policies across API Gateway instances as required. This enables administrators to apply policies at any point in their system & to distribute policy enforcement points around the network.
    • Pluggable pipeline: The API Gateway has an extensible internal message-handling pipeline that enables you to easily add extra access control & content-filtering rules. Customers don’t have to wait for a full product release to receive updates supporting emerging standards & additional adapters.
    • REST APIs: API Gateway has a REST support feature that enables you to make enterprise application operations & data accessible using Web APIs. API Gateway streamlines REST-to-SOAP conversion. It exposes REST APIs that map to SOAP services & dynamically creates a SOAP request based on the REST API call.
    • Internationalization: API Gateway supports a large number of character sets, international languages, & multi-byte message data.

Question: What is the difference between Authentication & Authorization?

Answer:

  1. Authentication: Before disclosing any confidential data or information, the authentication system checks & identifies the user. It is essential for preserving sensitive data in interfaces or systems where a user can access it. The user claims a person’s or organization’s identity using credentials like a username, password, fingerprint, or other proof. The application layer deals with authentication & non-repudiation problems. An inefficient authentication method greatly impacts a service’s availability.
    Authorization: The authorization method is used to accurately determine which permissions an authenticated user is granted. Authorization is only performed once the user’s identity has been verified; then, after checking the entries stored in the databases & tables, the user’s access list is established.
  2. Let’s check out the difference between Authentication & Authorization:
    1. Authentication checks the identity of a user before providing system access. Authorization checks a user’s authority before providing resource access.
    2. In the authentication process, users are verified. In the authorization process, users’ permissions are validated.
    3. Authentication is done before authorization. Authorization is done after authentication.
    4. Authentication usually needs the user’s login details or credentials. Authorization needs the user’s security or privilege levels.
    5. Authentication determines whether a person is a valid user. Authorization determines what permissions the user has.
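A toy sketch of the order of the two checks; the users, credentials, & permission names here are invented, & a real system would of course store hashed passwords, not plaintext:

```python
# Made-up stores for the sketch; real systems hash passwords & use a policy engine.
USERS = {"alice": "s3cret"}                  # username -> password
PERMISSIONS = {"alice": {"orders:read"}}     # username -> granted permissions

def authenticate(username, password):
    """Who are you? Verified first, against stored credentials."""
    return USERS.get(username) == password

def authorize(username, permission):
    """What may you do? Checked only after authentication succeeds."""
    return permission in PERMISSIONS.get(username, set())

assert authenticate("alice", "s3cret")           # identity verified
assert authorize("alice", "orders:read")         # access granted
assert not authorize("alice", "orders:delete")   # access denied
print("authentication precedes authorization")
```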

Question: What is WireMock?

Answer:

  1. WireMock is a simulator for HTTP-based APIs that helps you stay productive when an API you depend on isn’t complete or does not exist.
  2. WireMock refers to a simulator for HTTP-based APIs. Some consider it a mock server or a service virtualization tool. It allows you to remain productive when an API you depend on isn’t complete or doesn’t exist. It supports testing of failure modes & edge cases that the real API may not reliably produce.
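The core idea, stubbing an HTTP API that doesn't exist yet, can be illustrated without WireMock itself using only the Python standard library. The endpoint & payload below are made up:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A tiny WireMock-style stub: canned responses for an API that isn't built yet.
class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/users/42":
            body = json.dumps({"id": 42, "name": "stubbed user"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client code under development can now be exercised against the stub.
with urlopen(f"http://127.0.0.1:{server.server_port}/users/42") as resp:
    data = json.load(resp)
print(data)  # {'id': 42, 'name': 'stubbed user'}
server.shutdown()
```

WireMock adds much more on top of this (request matching, fault injection, recording), but the stub-server concept is the same.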

Question: What are the characteristics of a good Microservice?

Answer:

  1. Good Microservices have the following aspects:
    1. Loose coupling: An individual Microservice knows little or nothing about the other services. It is largely independent, so a change made to one Microservice does not require changes in the other Microservices.
    2. Highly cohesive: Being highly cohesive, each Microservice can independently provide a set of behavior.
    3. Bounded Context: Microservices act as the bounded context in a domain. Thus, it communicates with the rest of the domain through an interface for that bounded context.
    4. Business Capability: Microservices add the business capability to a system individually. It is like a small part of the big picture.
    5. Continuous Delivery: A good Microservice is built so that it can be continuously integrated & deployed to production. It accelerates the speed of delivering features to production.
    6. Backward Compatibility: Multiple versions of a Microservice can exist in the production. Therefore, any new version of a Microservice should provide backward compatibility to the clients.
    7. Stateless: A good Microservice is kept stateless to ensure less dependency between calls.
  2. Below are the characteristics of services applicable to Microservices:
    • Service contract: Just like SOA, Microservices are described by well-defined service contracts. REST & JSON are universally accepted for service communication in Microservices. Many techniques are used to define service contracts in the case of JSON/REST, such as WADL, JSON Schema, Swagger, and RAML.
    • Loose coupling: Microservices are loosely coupled & independent. In a majority of cases, Microservices accept an event as input & responds with another event. Messaging, REST & HTTP are commonly used for interaction between the Microservices.
    • Service abstraction: Service abstraction in Microservices is not just an abstraction of the service realization; it also provides complete abstraction of all environment details & libraries.
    • Service reuse: Microservices provides reusable business services. These are accessible by desktop channels, mobile devices, other Microservices, & other systems.
    • Statelessness: Well-designed Microservices are stateless & share nothing; no conversational state is maintained by the services. If there is a need to maintain state, it is kept in a database or in memory.
    • Services are discoverable: Microservices are discoverable, which means that in a typical Microservices environment, the Microservices self-advertise to make themselves available for discovery. When a service dies, it automatically takes itself out of the Microservices ecosystem.
    • Service interoperability: Services are interoperable since they use message exchange standards & standard protocols. REST/JSON is the most popular method for developing interoperability in Microservices. If further optimization is needed on communications, one can use other protocols such as Thrift, Protocol Buffers, Zero MQ, or Avro. However, the use of those protocols may limit the overall interoperability of services.

Question: What should be kept in mind while integrating Microservices?

Answer:

  1. The following are some key points to remember during the integration of Microservices:
    • Technology Agnostic APIs: Developing Microservices in a technology-agnostic way can help in the integration of multiple Microservices. With time, the technology implementation may change, but the interface between the Microservices remains the same.
    • Breaking Changes: A change in a Microservice must not be a breaking change for its clients. It is essential to reduce the impact of changes on existing clients so that they do not have to constantly change their code to adapt to a Microservice’s changes.
    • Implementation Hiding: Each Microservice must hide its internal implementation details from one another. It helps in reducing the coupling between Microservices integrated for a common solution.
    • Simple to use: A Microservice should always be simple to use for the clients, so the integration points get simpler. It should enable clients to choose their technology stack.
  2. Here are a few important things to keep in mind during the integration of Microservices:
    1. Technology Agnostic APIs: Build Microservices in a technology-agnostic way; it will help integrate multiple Microservices. The technology implementation can get changed with time, but the interface between the Microservices remains the same.
    2. Breaking Changes: No Microservice change should become a breaking change for a client. One should minimize the impact of a change on existing clients, so the client doesn’t have to change its code to adapt to the Microservice’s changes.
    3. Implementation Hiding: Microservices should hide their internal implementation details from one another. It helps in minimizing the coupling between Microservices integrated for a common solution.
    4. Simple to use: A Microservice must be simple to use for the consumers, so the integration points are also simpler. It should facilitate clients to choose their technology stack.

Question: What is the difference between Synchronous & Asynchronous communication in Microservices?

Answer:

  1. Synchronous communication refers to a blocking call, wherein the client blocks itself from doing anything else until the response comes back. In Asynchronous communication, the client moves ahead with its work after making an Asynchronous call; thus, the client is not blocked.
    In Synchronous communication, Microservices can provide an instant response regarding success or failure, so a Synchronous service is beneficial & effective in real-time systems. In Asynchronous communication, a service reacts based on a response received at some point in the future. Synchronous systems are also called request/response-based, while Asynchronous systems are known as event-based.
    Synchronous Microservices are not loosely coupled. A Microservice can use either Synchronous or Asynchronous communication, depending on the business needs.
  2. Such decisions require knowledge about different aspects of the business system. Communication standards can be easily defined & are unchangeable regardless of the approach to architecture you implement. When it comes to communication styles, we can divide them into two parts & define whether a protocol is asynchronous or synchronous.
    • Synchronous: For web app communication, the HTTP protocol has been the standard for years, & Microservices are no exception. HTTP is a stateless, synchronous protocol with some drawbacks, but it is still very popular. In Synchronous communication, the client sends a request & waits for the response from the service. Note that client code can still consume the protocol asynchronously (so the thread is not blocked & the response eventually reaches a callback), but the request/response interaction itself remains synchronous.
    • Asynchronous: The client does not block a thread while waiting for a response. In most cases, such communication is realized with messaging brokers. The message producer does not wait for a response; it only waits for an acknowledgement that the broker has received the message. Advanced Message Queuing Protocol or AMQP is the most popular protocol for this type of communication & is supported by many cloud providers & operating systems. An Asynchronous messaging system is implemented in a one-to-one (queue) or one-to-many (topic) mode.
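A minimal sketch of the asynchronous, broker-mediated style described above, using an in-process queue as a stand-in for a real AMQP broker such as RabbitMQ; the event names are invented:

```python
import queue
import threading

broker = queue.Queue()   # stands in for a message broker's queue
processed = []

def consumer():
    """The consuming service reacts to messages whenever they arrive."""
    while True:
        message = broker.get()  # blocks until a message is available
        if message is None:     # sentinel used here as a shutdown signal
            break
        processed.append(f"handled {message}")

worker = threading.Thread(target=consumer)
worker.start()

# The producer returns immediately after handing the message to the broker;
# it does not wait for the consumer's reply (non-blocking).
broker.put("order-created")
broker.put("order-paid")
broker.put(None)
worker.join()
print(processed)  # ['handled order-created', 'handled order-paid']
```

In the synchronous style, the producer would instead block on each call until the consumer returned a response.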

Question: What is the difference between Orchestration & Choreography in Microservices?

Answer:

  1. In Orchestration, you rely on a central system to control & call other Microservices to complete a task. On the other hand, in Choreography, each Microservice works like a state machine & reacts based on input from the other parts. Orchestration is a tightly coupled approach to Microservices integration, whereas Choreography is a loosely coupled approach.
    Choreography-based systems are easier to change & relatively more flexible than Orchestration-based systems. Orchestration is often implemented through synchronous calls, while Choreography is implemented through asynchronous calls. Synchronous calls are simpler than asynchronous communication.
  2. Choreography & Orchestration differ in the two main aspects of independence and control:
    • Choreography– Where Microservices work independently but in coordination with each other, using events or cues. Here the workflow or control of the saga is determined by a predefined set of events or cues.
    • Orchestration– In this, a conductor or orchestrator controls Microservices. It enables a centralized workflow or control of the saga. The orchestrator/conductor can be centralized for all the workflows or sagas or distributed as individual services for each workflow or saga. Thus, it provides different levels of independence to each Microservice.
  3. Choreography & Orchestration are modes of interaction in Microservices that differ in the following ways:
    • In Orchestration, there is a controller or orchestrator that controls the interaction between the Microservices. It dictates the business logic’s control flow & is responsible for ensuring everything happens on cue. It follows the request/response paradigm.
    • In Choreography, each Microservice works independently. It eliminates any hard dependencies between the Microservices; they are only loosely coupled through shared events. Each service does its own thing & watches for the events it is interested in. Choreography follows an event-driven paradigm.
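The event-driven choreography style can be sketched with a tiny in-process event bus; the "services" & event names below are invented for illustration:

```python
from collections import defaultdict

subscribers = defaultdict(list)  # event name -> list of handlers
log = []

def subscribe(event, handler):
    subscribers[event].append(handler)

def publish(event, payload):
    # No central orchestrator: each subscribed service reacts on its own.
    for handler in subscribers[event]:
        handler(payload)

# Each "service" registers only for the events it is interested in.
def payment_service(order):
    log.append(f"payment charged for {order}")
    publish("PaymentCompleted", order)   # emits its own event in turn

def shipping_service(order):
    log.append(f"shipping scheduled for {order}")

subscribe("OrderPlaced", payment_service)
subscribe("PaymentCompleted", shipping_service)

publish("OrderPlaced", "order-1")
print(log)  # ['payment charged for order-1', 'shipping scheduled for order-1']
```

In the orchestration style, a single controller would instead call the payment & shipping services directly, in an order it dictates.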

Question: What is Semantic Versioning?

Answer:

  1. Semantic Versioning is a kind of software versioning scheme that users understand easily. In the Semantic Versioning specification, each software version is specified in the Major.Minor.Patch format.
    1. Major version increment implies that the changes to the software are not backward compatible.
    2. Minor version increment implies that the new functionality is added, but the changes are still backward compatible.
    3. Patch version increment suggests that a fix to a bug is provided in the version.
    Semantic versioning is a preferred versioning scheme for Microservices release & development.
  2. Semantic Versioning or SemVer is a versioning system whose adoption has grown rapidly in the last few years, providing a universal way of versioning software development projects. Semantic Versioning is the best way to track what is going on with the software, as new add-ons, plugins, extensions, & libraries are being built almost every day.
  3. Semantic Versioning refers to a versioning scheme that uses meaningful version numbers. It mainly deals with how API versions compare in terms of backward compatibility. Semantic Versioning is meaningless without a well-defined model of how the API can be extended & evolved over time. It needs to be part of the API design & documentation, managed as an essential aspect of the general API management approach.
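A small sketch of comparing versions under the Major.Minor.Patch rules described above; the compatibility helper is illustrative, not part of the SemVer specification itself:

```python
def parse(version):
    """Split a 'MAJOR.MINOR.PATCH' string into an integer tuple."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def is_backward_compatible(old, new):
    """Per the rules above: a newer version stays backward compatible
    as long as the MAJOR number is unchanged."""
    return parse(new)[0] == parse(old)[0] and parse(new) >= parse(old)

assert is_backward_compatible("1.4.2", "1.5.0")      # minor bump: compatible
assert is_backward_compatible("1.4.2", "1.4.3")      # patch bump: compatible
assert not is_backward_compatible("1.4.2", "2.0.0")  # major bump: breaking
print("semver checks passed")
```

Comparing versions as integer tuples (rather than strings) is what makes "1.10.0" correctly sort after "1.9.0".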