Sunday, April 16, 2023

JSON vs XML

Difference Between JSON and XML


  •  JSON values are typed (strings, numbers, booleans, arrays, objects), whereas XML data is untyped text.
  •  JSON does not provide namespace support, while XML supports namespaces.
  •  JSON has no display capabilities, whereas XML can be styled for display (for example, with XSLT).
  •  JSON offers little built-in security, whereas XML has dedicated security standards such as XML Signature and XML Encryption.
  •  JSON supports only UTF-8 encoding, whereas XML supports various encoding formats.

What is JSON?

JSON stands for JavaScript Object Notation. It is a file format that uses human-readable text to store and transmit data objects consisting of attribute-value pairs and arrays, so information is kept organized and easy to access.
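
For illustration, a small JSON document (a hypothetical employee record) looks like this:

{
  "name": "Asha Rao",
  "age": 31,
  "skills": ["MuleSoft", "Java"],
  "active": true
}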

What is XML?

XML (eXtensible Markup Language) is a markup language designed to store data and is popularly used for transferring data. It is case-sensitive. XML lets you define your own markup elements, effectively generating a customized markup language. An element is the basic unit of an XML document, and XML files use the .xml extension.
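
For comparison, the same hypothetical employee record expressed in XML:

<employee>
  <name>Asha Rao</name>
  <age>31</age>
  <skills>
    <skill>MuleSoft</skill>
    <skill>Java</skill>
  </skills>
  <active>true</active>
</employee>

Note how the XML form is more verbose, but it can carry namespaces and be styled for display.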



Image Source: google.com, guru99.com

Friday, April 14, 2023

REST vs SOAP API: Differences Between Web Services

The key differences are:


  • SOAP stands for Simple Object Access Protocol, whereas REST stands for Representational State Transfer.
  • SOAP is a protocol, whereas REST is an architectural pattern.
  • SOAP uses service interfaces to expose its functionality to client applications, while REST uses Uniform Resource Locators (URLs) to access resources.
  • SOAP needs more bandwidth for its usage, whereas REST doesn’t need much bandwidth.
  • Comparing SOAP vs REST APIs, SOAP works only with XML formats, whereas REST works with plain text, XML, HTML, and JSON.
  • SOAP cannot make use of REST, whereas REST can make use of SOAP.

What is SOAP?

SOAP is a protocol that was designed before REST came into the picture. The main idea behind designing SOAP was to ensure that programs built on different platforms and in different programming languages could exchange data easily.
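
For illustration, every SOAP message travels inside an XML envelope. Below is a minimal sketch of a SOAP 1.2 request; the GetTutorial operation and its namespace are hypothetical:

<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header/>
  <soap:Body>
    <m:GetTutorial xmlns:m="http://demo.guru99.com/Tutorial">
      <m:TutorialID>42</m:TutorialID>
    </m:GetTutorial>
  </soap:Body>
</soap:Envelope>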

What is REST?

REST was designed specifically for working with components such as media components, files, or even objects on a particular hardware device. Any web service built on the principles of REST is called a RESTful web service. A RESTful service uses the standard HTTP verbs GET, POST, PUT, and DELETE to work with the required resources.
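
For example, a RESTful service managing a hypothetical /tutorials resource would map the verbs like this:

GET    /tutorials/42    – read tutorial 42
POST   /tutorials       – create a new tutorial
PUT    /tutorials/42    – replace tutorial 42
DELETE /tutorials/42    – delete tutorial 42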



When to use REST?

One of the most hotly debated topics is whether to use REST or SOAP when designing web services. Below are some of the key factors that determine when each technology should be used. REST services should be used in the following instances:

  • Limited resources and bandwidth – Since SOAP messages are heavier in content and consume far more bandwidth, REST should be used in instances where network bandwidth is a constraint.
  • Statelessness – If there is no need to maintain a state of information from one request to another, then REST should be used. If you need a proper information flow, wherein some information from one request needs to flow into another, then SOAP is better suited for that purpose. Take the example of any online purchasing site: the user first adds items to a cart, and all of the cart items are then transferred to the payment page to complete the purchase. This is an application that needs the state feature, because the state of the cart items must be carried over to the payment page for further processing.
  • Caching – If there is a need to cache a lot of requests, then REST is the perfect solution. At times, clients could request the same resource multiple times, which increases the number of requests sent to the server. By implementing a cache, the results of the most frequent queries can be stored in an intermediate location. Whenever the client requests a resource, it first checks the cache; if the resource is there, the request does not proceed to the server. Caching thus minimizes the number of trips made to the web server (a minimal sketch follows this list).
  • Ease of coding – Coding REST services and their subsequent implementation is far easier than SOAP. So if a quick-win solution is required for web services, REST is the way to go.
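
As a minimal sketch of the caching idea above (fetchFromServer is a hypothetical stand-in for the real HTTP GET), a simple client-side cache in Java could look like this:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of client-side response caching for a REST resource.
public class ResourceCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public String get(String url) {
        // Check the cache first; only call the server on a miss.
        return cache.computeIfAbsent(url, this::fetchFromServer);
    }

    private String fetchFromServer(String url) {
        // Placeholder for a real HTTP GET (e.g., via java.net.http.HttpClient).
        return "response for " + url;
    }

    public static void main(String[] args) {
        ResourceCache cache = new ResourceCache();
        System.out.println(cache.get("/tutorials/42")); // goes to the "server"
        System.out.println(cache.get("/tutorials/42")); // served from the cache
    }
}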

When to use SOAP?

SOAP should be used in the following instances:

  1. Asynchronous processing and subsequent invocation – If the client needs a guaranteed level of reliability and security, then the newer SOAP 1.2 standard provides a lot of additional features, especially when it comes to security.
  2. A formal means of communication – If both the client and server agree on the exchange format, SOAP 1.2 gives rigid specifications for this type of interaction. An example is an online purchasing site in which users add items to a cart before payment is made. Let’s assume we have a web service that does the final payment. There can be a firm agreement that the web service will accept only the cart item name, unit price, and quantity. In such a scenario, it is better to use the SOAP protocol.
  3. Stateful operations – If the application requires state to be maintained from one request to another, then the SOAP 1.2 standard provides the WS-* structure to support such requirements.


Challenges in REST API

  1. Lack of security – REST does not impose security the way SOAP does. This makes REST appropriate for publicly available URLs, but when confidential data is passed between the client and the server, REST is the worst mechanism to use for web services.
  2. Lack of state – Most web applications require a stateful mechanism. For example, a purchasing site with a shopping cart needs to know the number of items in the cart before the actual purchase is made. Unfortunately, the burden of maintaining this state lies with the client, which makes the client application heavier and harder to maintain.

Challenges in SOAP API

API stands for Application Programming Interface and is offered by both the client and the server. In the client world it is offered by the browser, whereas in the server world it is provided by the web service, which can be either SOAP or REST.


  1. WSDL file – One of the key challenges of the SOAP API is the WSDL document itself. The WSDL document is what tells the client of all the operations that can be performed by the web service. It contains all the information, such as the data types being used in the SOAP messages and which operations are available via the web service. The code snippet below is just part of a sample WSDL file.
<?xml version="1.0"?>
<definitions name="Tutorial"
    targetNamespace="http://demo.guru99.com/Tutorial.wsdl"
    xmlns:tns="http://demo.guru99.com/Tutorial.wsdl"
    xmlns:xsd1="http://demo.guru99.com/Tutorial.xsd"
    xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
    xmlns="http://schemas.xmlsoap.org/wsdl/">

  <types>
    <schema targetNamespace="http://demo.guru99.com/Tutorial.xsd"
        xmlns="http://www.w3.org/2000/10/XMLSchema">

      <element name="TutorialNameRequest">
        <complexType>
          <all>
            <element name="TutorialName" type="string"/>
          </all>
        </complexType>
      </element>
      <element name="TutorialIDRequest">
        <complexType>
          <all>
            <element name="TutorialID" type="number"/>
          </all>
        </complexType>
      </element>
    </schema>
  </types>
</definitions>
As per the above WSDL file, we have an element called “TutorialName” of type string, which is part of the element TutorialNameRequest.

Now, suppose the WSDL file were to change as per the business requirements, and TutorialName had to become TutorialDescription. This would mean that all the clients currently connecting to this web service would then need to make the corresponding change in their code to accommodate the change in the WSDL file.

This shows the biggest challenge of the WSDL file: it is a tight contract between the client and the server, and a single change can have a large impact on the client applications as a whole.

  2. Document size – The other key challenge is the size of the SOAP messages transferred from the client to the server. Because of the large messages, using SOAP in places where bandwidth is a constraint can be a big issue.

Sources: Google, guru99.com

Wednesday, April 5, 2023

What is CloudHub 2.0? CloudHub 2.0 vs CloudHub 1.0 vs Runtime Fabric (RTF)

CloudHub 2.0 Overview

 CloudHub 2.0 is a fully managed, containerized integration platform as a service (iPaaS) where you can deploy APIs and integrations as lightweight containers in the cloud.

Why Deploy on CloudHub 2.0?

CloudHub 2.0:

  •     Provides for deployments across 12 regions globally.
  •     Dynamically scales infrastructure and built-in services up or down to support elastic transaction volumes.
  •     Builds in security policies, protecting your services and sensitive data with encrypted secrets, firewall controls, and restricted shell access.
  •     Encrypts certificates, passwords, and other sensitive configuration data at rest and in transit within Anypoint Platform.
  •     Provides a standardized isolation boundary by running each Mule instance and service as a separate container.
CloudHub 2.0 architecture comprises two major components: Anypoint Platform services and shared global regions. These two components, along with Anypoint Runtime Manager through which you access them, work together to run your integration applications.


1. Integration Applications: Applications that you create and deploy to CloudHub 2.0 to perform integration logic for your business
2. Runtime Manager: The user interface that enables you to deploy and monitor integrations and configure your account
3. Platform Services: Shared CloudHub 2.0 platform services and APIs, which include Anypoint Monitoring, alerting, logging, account management, private spaces/secure data gateway, and load balancing
4. CloudHub 2.0: An elastic cloud of replicas (Mule instances) that run integration applications

CloudHub 2.0 Replicas
Replicas are dedicated instances of Mule runtime engine that run your integration applications on CloudHub 2.0.
Capacity: Each replica has a specific amount of capacity to process data. Select the size of your replicas when configuring an application.
Isolation: Each replica runs in a separate container from every other application.
Manageability: Each replica is deployed and monitored independently.
Locality: Each replica runs in a specific global region, such as the US, EU, or Asia-Pacific.

The memory capacity and processing power of a replica depend on how you configure it at the application level.
Replica sizes have different compute, memory, and storage capacities.
You can scale replicas vertically by selecting one of the available vCore sizes.



CloudHub 2.0 Feature Comparison

This table compares the features and support for CloudHub 2.0, CloudHub 1.0, and Anypoint Runtime Fabric on Self-Managed Kubernetes.








Fully managed - MuleSoft provides and manages the feature.
Self-managed - MuleSoft provides the feature, but the customer manages the feature.
Supported - MuleSoft does not provide the feature, but it is available on supported partner platforms. The feature is managed by the vendor, platform, or customer.
Not supported - MuleSoft does not provide the feature, and the customer cannot configure it.

Technical Enhancements from CloudHub 1.0 to CloudHub 2.0
  •     With the added fractional vCore offerings in CloudHub 2.0, you may no longer need to bundle multiple listeners in the same application to reduce your resource usage.
  •     In CloudHub 2.0, private spaces function as improved VPCs from CloudHub 1.0. You can automatically assign a private network for the applications in a private space. You can also configure a private ingress load balancer that auto-scales to accommodate traffic.
  •     By default, VPNs allow high availability.
  •     Applications now have public and private endpoints by default. You can also configure multiple public endpoints. You can access the endpoint addresses in Runtime Manager.
  •     You can make in-place edits and updates to the TLS context and truststore of the ingress layer.
  •     In CloudHub 1.0, application names had to be unique per control plane. In CloudHub 2.0, application names must be unique per private space.
  •     Custom log4j.xml is supported by default to enable streaming logs to external log collectors. You no longer need to contact Support to enable or disable this feature.
  •     You can disable log streaming using Runtime Manager. You no longer need to contact Support to enable or disable this feature.
  •     Self-service logs for the dedicated load balancer and ingress are available via a private space. Titanium users can also download logs through Anypoint Monitoring.
  •     Using ports 80 and 443, applications inside a private space can communicate through the internal load balancer via the private endpoint. Note that this depends on the application protocol.

Image source: mulesoft.com

Thanks for reading... Happy Learning :-)

Tuesday, April 4, 2023

Kubernetes in simple steps

 What is a Container?

A container is like a software unit or wrapper that packages everything together: your application code, its dependencies, and so on.

You can think of it as a portable environment in which to run your application, and you can easily manage the container yourself (operations like starting, stopping, and monitoring).

Why Kubernetes?

Suppose you have a requirement to run 10 different applications (microservices) ~ 10 containers.

And if you need to scale each application for high availability, you create 2 replicas for each app ~ 2 * 10 = 20 containers.

Now you have to manage 20 containers.

Would you be able to manage 20 containers on your own? (20 is just an example; there could be more based on the requirement.) It would be difficult, for sure.

Orchestration

A container orchestration tool or framework can help you in such situations by automating all of the deployment and management overhead.

One such container orchestration tool is Kubernetes.

What is Kubernetes?

Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. 

It provides a set of abstractions and APIs for managing containers, so you can focus on building your application and not worry about the underlying infrastructure.

With Kubernetes, you can manage multiple containers across multiple machines, making it easier to streamline and automate the deployment and management of your application infrastructure.

Kubernetes is fast becoming the de facto standard for container orchestration in the cloud-native ecosystem.

Kubernetes Architecture



Control Plane: This is the brain of the Kubernetes cluster and manages the overall state of the system.

API Server: Provides a REST API for the Kubernetes control plane and handles requests from various Kubernetes components and external clients.
etcd: This is a distributed key-value store that stores the configuration data of the entire Kubernetes cluster.
Controller Manager: This component ensures that the desired state of the cluster is maintained by monitoring the state of various Kubernetes objects (e.g., ReplicaSets, Deployments, Services) and reconciling any differences.
Scheduler: This component assigns Pods to worker nodes based on resource availability and other scheduling policies.

Worker Nodes: These are the machines that run the application containers.
Each worker node includes the following components:
Kubelet: This component communicates with the API server to receive instructions and ensures that the containers are running correctly.
Container Runtime: This is the software that runs the containerized applications (e.g., Docker, containerd).
kube-proxy: This component handles network routing for services in the cluster.

Other Key Components -

Pod: A pod is the smallest deployable unit in Kubernetes and represents a single instance of a running process in the cluster. A pod can contain one or more containers.
Container: A container is a lightweight, standalone executable package that contains everything needed to run an application, including code, runtime, system tools, and libraries.
Service: A service is an abstraction that defines a set of pods and a policy for how to access them. Services provide a stable IP address and DNS name for a set of pods, allowing other parts of the application to access them.

ReplicaSet: It ensures that a specified number of replicas of a pod are running at all times, replacing pods that fail or are deleted. (Demand-based autoscaling is handled by a separate object, the HorizontalPodAutoscaler.)
Deployment: A higher-level object that manages ReplicaSets and provides declarative updates to the pods and ReplicaSets in the cluster.
ConfigMap: A configuration store that holds configuration data in key-value pairs.
Secret: A secure way to store and manage sensitive information such as passwords, API keys, and certificates.
Volume: A directory that is accessible to the containers running in a pod. Volumes can be used to store data or share files between containers.

You can imagine Kubernetes as a classic 'Master - Worker' cluster setup. The master node runs the processes absolutely necessary to manage the cluster, while the worker nodes actually run your applications.

So you basically tell Kubernetes the application's desired state, and it is then the responsibility of Kubernetes to achieve and maintain that state. You give these instructions using YAML or JSON manifest/config files.
(For example: I want to run 3 different Spring Boot applications, each having 2 replicas, on some specified ports. I would prepare the manifest files, give them to Kubernetes, and the rest would be taken care of automatically.)
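
As a minimal sketch of such a manifest (the application name, image, and port below are hypothetical), a Deployment that keeps 2 replicas of one Spring Boot application running, plus a Service that gives them a stable address, could look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-app                        # hypothetical app name
spec:
  replicas: 2                             # desired state: 2 replicas
  selector:
    matchLabels:
      app: orders-app
  template:
    metadata:
      labels:
        app: orders-app
    spec:
      containers:
        - name: orders-app
          image: example/orders-app:1.0   # hypothetical image
          ports:
            - containerPort: 8080         # Spring Boot default port
---
apiVersion: v1
kind: Service
metadata:
  name: orders-app
spec:
  selector:
    app: orders-app                       # routes traffic to the pods above
  ports:
    - port: 80                            # stable service port
      targetPort: 8080

You would hand this file to the cluster with kubectl apply -f app.yaml, and Kubernetes would then create the replicas and keep them running.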

Image source: Google

Thanks for reading...Happy Learning ...!

Saturday, April 1, 2023

API-Led Connectivity { Pros & Cons }

 

Businesses today operate in a primarily digital landscape. Customers increasingly interact with brands digitally, and companies use digital platforms and processes. IT experts are in high demand, but the supply is low, as the number of applications leveraged by businesses and consumers increases. With so much occurring digitally, the need for connected business systems is imperative. Your databases and applications must communicate effectively and securely. 
API-led connectivity is one method of ensuring your business can keep pace with evolving technology and digital platforms.


1. API-led connectivity improves productivity

2. Reduce operational costs

3. Quickly launch projects with API-led connectivity

4. Enables new business models and revenue streams

5. Enhance business process management

6. Leverage data from legacy IT

7. APIs drive innovation


APIs allow companies to accelerate innovation by unlocking their core capabilities as digital assets. These assets allow companies to expand their reach and tap into markets that otherwise wouldn’t have been considered (whether due to a lack of resources or a lack of awareness). For example, an insurance company that adopted digital technologies and transformed itself into an API-led digital enterprise could not only sell policies through its own web and mobile channels, but also create a platform for collaboration with third-party companies and alliances, such as airlines, travel portals, and channel partners, to sell its travel and home insurance by white-labelling its products.
 

The API-led connectivity approach is beneficial in a large organization where you'll have multiple development teams. Your different lines of business can each work on APIs within their own domain in the process layer. For example, your web, mobile, or third-party partners can connect to APIs at the experience layer. Likewise, the system layer can be managed by Central IT groups associated with your various systems of record.



1. Experience Layer

At the experience layer, we can apply security appropriate to different application consumers, depending on who they are. These consumers could be web, mobile, or third parties, either internal or external to your organization. You can apply multiple security policies, such as client ID enforcement, certificates, and OAuth, and offer multiple SLA tiers based on subscriptions. We can manage these consumers by granting or withdrawing access to your applications, authorizing access, and monitoring these APIs to measure volumes and throughput. This also makes it easy to monetize your products or services at this layer.

2. Process Layer

The process layer contains your business capabilities and can be divided into various domains. Each line of business (LOB) can have its own APIs defining the products or services it provides. For example, you would have a set of APIs defining the customer, products, or billing. These process-level APIs are accessed by consumers from the experience layer. In addition, process APIs can call other APIs at the process layer, or call APIs at the system layer, which provide or update information from your systems of record.

3. System Layer

The system layer exposes information from your various systems of record: your legacy systems, databases, CRMs (Salesforce), and ERPs (SAP). For example, you could add queues, caches, timeouts, and circuit breakers at this layer if you are experiencing performance issues. In addition, some API framework vendors automatically create this layer and use AI to improve performance, eliminate redundancies, and remove unused functionality.

Pros and Cons

Some common complaints I hear about this three-layered approach are that there are multiple network hops from layer to layer and that it adds complexity. These are the same questions I heard when we moved to model-view-controller, or when we first used separate servers for the database and the application. However, a well-designed application will always trump a few milliseconds of performance. An API-led connectivity approach can actually improve performance by adding caching, spike control, and monitoring for multiple consumers, and by right-sizing the security at the system layer. Also note that heavyweight security is usually kept at the experience and system layers, whereas the process layer is usually secured with a faster client ID and password check. This can lead to faster overall system performance.
Two other important benefits of an API-led approach are reusability and the ability to quickly plug in new consumers and systems of record.


Sources:
Image from mulesoft.com