APIs and services must be part of the environment they are intended to serve. The components and systems in these ecosystems are not static architecture and development artifacts: they develop, grow, and adjust their design and implementation over their lifetime. As the business's need for functionality and information changes, APIs must adapt to the evolving business and technology environment.
Below is the Enterprise API Workflow, which shows the whole life cycle of an API, from requirements through publication.
For this reason, stages in the architecture and development of APIs are depicted as cycles:
Plan ⇒ Design ⇒ Build ⇒ Test ⇒ Deploy ⇒ Run.
These stages are broadly categorized into three API life cycles, as represented in the screenshot below, managed by the API stakeholders.
- The cycle starts with “API Strategy”
The API Architecture and API Development cycles represented above comprise several phases, as shown below:
1.1 API Stakeholders
API stakeholders consist of the different groups listed below, which manage and participate in the various life cycles of the API.
- API owners: sponsor and manage the APIs
- Development community: uses the APIs to implement integration or another functionality
- Users/Consumers: use applications or integrations built upon the APIs discussed above to achieve objectives, whether commercial, entertainment, or otherwise. Users represent the demand side of the API lifecycle. Demands may include:
- need for richer, more granular API
- faster response times
- more visibility of usage patterns
1.2 API Strategy Cycle
The API Strategy cycle typically begins with identification of business needs – e.g.
- build fast interfaces exposed to the organization's internal or external systems
- provide access to third parties or mobile devices
- improve and standardize the integration structure internally
Once business needs are identified, the APIs can be designed in preparation for implementation. The following are the key principles driving the API strategy:
- Build reusable assets that enable line-of-business IT and other application developers.
- Focus on creating reusable assets and emphasize consumption of those assets, rather than building for a specific project.
- Assets need to be discoverable, and developers need to be enabled to self-serve them in projects.
- Provide options for feedback on and validation of the API before development even starts.
- Provide visibility into, and control over, usage of the APIs.
1.3 Architecture Cycle
The architecture cycle is iterative and is followed by the Development/DevOps cycle, which produces the usable API and deployable runtime artifacts. There are several phases in the architecture cycle, as represented in the diagram below. At the center of this cycle is ‘requirements management’, which affects all phases.
1.3.1 Architecture Context
This phase reviews the business and technology context in which API work occurs:
- define the scope of the API program
- establish governance processes and models
- define the key API architecture principles.
This phase also validates and captures business goals and strategic drivers, and identifies relevant API users and stakeholders along with their objectives and concerns. The results of this phase are:
- a list of key functional and non-functional requirements for the overall API work,
- identification of relevant patterns and building blocks to address these requirements.
The Architecture Context and Delivery phases can benefit from the establishment and use of a central repository of reusable artifacts, useful for the creation and discovery of reusable content:
- organizational models
- business processes
- data models
- application components
- API definitions
- integration flows
- infrastructure/technology models.
Typically, the repository is initialized with patterns, best practices, templates, standards, etc. needed to start the API work, and is updated during the Architecture Delivery phase with potentially reusable building blocks.
Anypoint Exchange provides such a repository – it captures, discovers, and fosters the reuse of integration best practices, connectors, templates, examples, and APIs.
Use of this kind of central repository results in increased productivity for architects and developers, avoids redundant work, and fosters adoption of best practices.
1.3.2 Architecture Delivery
The goal of the Architecture Delivery phase is to define the baseline and target API architectures and identify the gaps between them. This is done at three different levels, focusing respectively on business, information-systems (application and data), and technology aspects, as shown in the figure below.
1.3.3 Business Architecture
This phase identifies the existing business architecture and defines the target business processes, organizational structure, and governance model that will be supported or affected by the API work. Gaps between the existing and desired business states must be identified and resolved.
1.3.4 Application and Data Architecture
- Captures the existing services and APIs
- Defines the API functionality of the target applications needed for the business architecture, and identifies the gaps between existing and target APIs.
- Defines the integration components and interfaces required to bridge these gaps.
- Captures the data models of the existing application services, the definition of the data models of the new application services, and identifies the gaps between them.
The high-level mappings and data transformations required to bridge the gaps are defined.
1.3.5 Technology Architecture
- Defines the required infrastructure and software components and, by comparing them with the existing infrastructure and software components, identifies the gaps between them.
- It also describes non-functional requirements such as availability, performance, scalability, operability and maintainability.
1.3.6 Security Architecture
Covering all three architecture layers, the security architecture defines the security requirements, including identity management, authentication, authorization, encryption, non-repudiation, etc., at all levels (business, application, data, and technology).
1.3.7 Transition to Development
Transition to development is not the result of deploying a software release into production; it transpires across the different phases of the project-management lifecycle. The figure below shows an iterative project-management lifecycle that can be mapped onto both traditional waterfall (gated) and Agile approaches.
The project feasibility phase deals with planning API implementation projects and establishing an effective DevOps organization. The objective of the planning is to consolidate the approach and distribute project requirements and development-iteration expectations based on business and technical priorities, benefits, and dependencies.
Key roles for the API lifecycle are the API Coaches and the API Product Owner, who determine the project vision and roadmap and adopt the API-led connectivity approach to reach mass adoption. API Analysts and API Architects use Anypoint Exchange and enablement assets, and provide a deep understanding of industry trends, vendors, frameworks, and practices to
- make buy/build/partner decisions
- assess high-level costs and ROI.
The table below describes the key roles and responsibilities needed to support the API lifecycle. The definitive team structure, and who will play the listed roles, needs to be decided by the PSEG team.
| Role | Profile | Responsibilities |
| --- | --- | --- |
| API Coaches | Experienced in coaching teams | Coach teams to think differently and adopt the API-led connectivity approach described in the sections below. |
| API Product Owners / Asset Owners | From Central IT, ETS, or the rest of the client organization; should understand and apply product-management fundamentals to each API or asset | Champion the API and engage the rest of the client organization to reach mass adoption. Keep the API operational and optimize it through the API lifecycle (inception through deprecation). |
| API Analysts | Integration and business analysts who can understand project-level integration requirements and translate them to Centre for Enablement (C4E) assets (and vice versa) | Generate appropriate demand for the API by triaging project requirements into priority self-serve candidates. |
| API Architect(s) | Deep experience of integration and API architecture and a thorough understanding of industry trends, vendors, frameworks, and practices | Provide ‘enough’ governance over the design (using the API-led connectivity approach) and operation of the actual API assets. These architects could be part of a wider community, not just the API initiative. |
| API Lead / Sponsor(s) | Owner of the API within the organization; strategic and operational management and leadership experience | Manage the overall success of the API initiative, manage daily operations, measure ROI and performance, manage senior stakeholder and management perception, manage budget and funding. |
| API Developers | Deep experience of integration development, APIs, and Agile methodology | Develop API assets. |
| API Admins | Administration skills | Manage the lifecycle of API deployments and user and API access management, configure the platform for alerts, and view/analyze the logs. |
The goal of the Roadmap & Release Planning phase is to develop a release plan and obtain funding and approval. API Analysts, API Product Owners, API Architects, and API Leads execute the following tasks:
- develop feature lists
- prioritize features based on business value and dependencies
- estimate implementation effort needed
- develop initial release plan
- develop more detailed cost estimates.
This phase continues with assigning of appropriately skilled and experienced resources to the identified projects.
During iteration 0 (setup), the environment (hardware, software, team, etc.) is set up, and high-level business-process, data, and architectural models are created. Any gaps in skill and experience identified in any aspect of the API lifecycle should be addressed by consistent training, coaching, and mentoring of team members, with guidance from the Centre for Enablement (C4E).
The goal of the Development Iteration phase is to deliver working software that the (API) product owners accept. The iteration team is responsible for planning and design reviews of current and forthcoming iterations. The API Coach, API Architect, and API Leads coach the team during development on:
- API implementation strategy
- accurate estimations
- setting priorities for review
The team also shares the vision for upcoming iterations.
The Pre-Release phase prepares to bring the iteration into production. Activities include final system and acceptance testing, finalizing documentation, piloting the release, training end users, and deploying it into production.
API Leads and API Owners identify defects and enhancements by collaborating with DevOps support; these defects and enhancement requests are inserted into the backlog for prioritization.
1.3.8 Requirements Management
- Requirements, like all artifacts of the design and development process, need to be managed and controlled.
- As a best practice, requirements are usually separated into functional and non-functional (technical) requirements.
- After the compilation and acceptance of the requirements they need to be referenced in the design and in the definition of the different test cases. Using a central requirement and issue-management system such as JIRA is recommended.
1.3.9 Change Management
- Because APIs follow a different integration paradigm, the teams involved should adjust their methods of working to meet different needs.
- A major change is the increased frequency of API delivery relative to the back-end systems; agile practices are recommended.
- Also important is the management of the outward-facing aspect of APIs.
- The API provider needs to inform the consumer community (whether internal or external users) about new API functionality, changes to existing functionality in new releases, and testing functionality (e.g. within the developer portal).
1.3.10 Implementation Governance
- Implementation governance is essential during the implementation phase.
- Standard architectural best-practices should be applied.
- During implementation governance, reusable patterns, artifacts, documents should be identified and published to the Enterprise Repository (e.g. Anypoint Exchange).
1.4.1 API Development/DevOps Cycle
The diagram below shows the phases, with the Mule-specific tools associated with each:
Before we dive into the details of each phase of the life cycle, below are some links that will help developers get started:
Setting up Local(laptop/desktop) Environment:
API Manager Usage Guide:
MUnit Usage Guide:
Support-Knowledge Base Articles:
1.4.2 API Design Phase
This phase is concerned with defining and validating the detailed functional and technical aspects of the API.
The design process starts with identifying and modelling the API resources and the relationships among them, then identifying the operations that manage those resources and mapping them to standard HTTP methods, and, finally, defining the representation format of the operations’ request and response messages.
Anypoint Platform for APIs provides capabilities to simulate the API and solicit feedback before exposing the actual backend functionality through the API; this allows the functionality and ease of use of the API to be validated before much effort is invested.
APIs are living software artifacts; they exist over time and will be adjusted and modified during that time. APIs evolve, and the design process needs to take this into account. The design deliverables (RAML and supplementary documentation) need to be adjusted as the API is modified.
This phase consists of the following sub-tasks:
| Process Steps | Tools Used | Roles |
| --- | --- | --- |
| Identify and understand the business use case | SharePoint, Microsoft Word, and any other tools used for requirements management | Business Analysts, API Owners, API Developers, API Architects |
| Identify APIs, resources, and EA approval | Anypoint Exchange and Anypoint API Designer | API Owners, API Developers, API Architects |
| Create API assets | Anypoint Manager: API Designer and API Portal | API Owners and API Developers |
| Validate and feedback | API Portal, Exchange, Postman, or SoapUI | API Architects, Business Analysts, QA |
1.4.3 Identify Business Use-Case
Identify the high-level business functionality that needs to be exposed by the different applications, by referring to use-case documentation from client applications or to requirements-gathering sessions with the teams requesting the API.
1.4.4 Identify APIs, Resources/EA Approval
API Owners or Developers first:
- Search Anypoint Exchange for an existing API with similar functionality.
- If one exists, verify with the EA/API Architect whether to evolve it by creating a new version.
- Identify the high-level resources based on business needs.
The Enterprise Architecture team (a group of architects from several business and technology domains) needs to be involved in deciding and approving the API category and the ownership of the API.
- API Owners provide the EA or API Architect with API requirement details, API category and resource details.
- API Architect approves the API category (data, process, experience or proxy) and ownership of the API.
1.4.5 Create API Assets
API Owners/Developers build the API definition and other assets using Anypoint API Manager/Design Center.
The following API assets need to be developed:
- RAML specifications in API Design Center. The RAML contains the structure of the API signatures, request and response data schemas, and examples for API consumers to use for rapid testing and development with mocks. Well-formed, comprehensive RAML gives developers a seamless experience – the ability to interact directly with the (mocked) API to:
- make requests
- receive request-validation messages – including actual HTTP response-codes applicable to the request made
- receive responses (with mocked response data)
- Publish to Exchange: portals allow the definition of arbitrary content for richer documentation, such as extensive explanations of how to access the API, its methods, diagrams, etc.
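For illustration, a minimal RAML 1.0 fragment of the kind authored in Design Center might look as follows (the API title, resource, and field names are hypothetical, not taken from any actual PSEG API):

```raml
#%RAML 1.0
title: Account API
version: v1
baseUri: https://api.example.com/accounts/{version}

/accounts:
  get:
    description: List accounts, optionally filtered by status.
    queryParameters:
      status:
        type: string
        required: false
    responses:
      200:
        body:
          application/json:
            example: |
              [{ "id": "A-100", "status": "active" }]
  /{accountId}:
    get:
      description: Fetch a single account by its identifier.
      responses:
        200:
          body:
            application/json:
              example: |
                { "id": "A-100", "status": "active" }
```

The inline examples are what the mocking service returns to consumers before any back-end implementation exists.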
1.4.6 Validate and Feedback
Once the API is defined and published to Exchange, publish/email the Exchange link to the Business Analysts, API Architects, Application Developers, and QA team for review and validation of the API. The team can use the Exchange portal to test various scenarios and to validate the expected request and response structures against the business needs. The team provides feedback through the Exchange portal itself.
1.4.7 Secure the API
- Like all gateways to the outside world, APIs need to be secured and monitored.
- Securing an API means that its users can be identified and authenticated, and that their authorization to use the API can be verified. It is also important to understand that the success of an API depends on its ease of use: if the security gateway is too complex, the API will be rejected by the user community.
- Using industry-standard authentication protocols makes it possible to establish a sufficient level of security while still allowing the API to be used with limited extra effort. Establishing the “right” level of security for the API is part of the design process.
- The use of custom security protocols is discouraged because:
- the effectiveness of the protocol has to be ensured (industry-accepted protocols have been reviewed and accepted by a wide range of security experts)
- the protocol needs to be maintained to cover an evolving number of security threats
- the user of the API has to adapt to a special protocol
The following sections describe several standard protocols that can be used to secure the APIs.
The API security step consists of the following sub-tasks:
| Process Steps | Tools Used | Roles |
| --- | --- | --- |
| Identity Management | Anypoint Access Management | API Admins / Release Management |
| Policy and Endpoint Configuration | Anypoint API Manager | API Developers |
| API Assets Export | Anypoint API Manager | API Developers |
1.4.7.1 Identity Management
Identity management is concerned with the identity or user account used to access the Anypoint Platform portal. By default, the Anypoint Platform portal's own identity management is used, which requires users to register with the platform.
Identity management also allows for definition of custom roles for users across multiple environments.
For example, one can create a role called “DevOps” that has full control in development and QA, while retaining some visibility into a production environment for debugging. The Anypoint Platform supports federated external identity management.
Federated external identity management is required to enable an organization to use their existing user credentials (UID and password) to login to Anypoint Platform Portal and establish single-sign-on with other systems that are also federated.
PSEG plans to use Okta (via SAML) as an identity provider. Once the SSO URL is set up with Okta, the URL for the Anypoint Platform could be like:
1.4.7.2 Client Management
Client management is about managing the identities of Anypoint Platform API clients. As an API owner, you can apply an Anypoint Platform OAuth 2.0 policy to an API to authenticate the client applications that try to access it. You need an OAuth provider to use the policy; Anypoint Platform supports OpenAM (version 11 or 12) and PingFederate as OAuth providers. PSEG currently does not have an OAuth provider, but if they decide to procure one, the configuration for client management is performed at the organization level; below is the link to get more details on that
1.4.7.3 Supporting Client ID / Client Secret
With this out-of-the-box policy enabled, all calls to an API must include a valid client ID and client secret. To obtain these, the API consumer must register an application in the API Portal (Exchange).
When the policy is applied, the credentials are by default expected as query parameters named client_id (API key) and client_secret (API secret). The API developer can alter the Mule expression that points to these and obtain the values from elsewhere in the HTTP message.
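From the consumer's side, the default query-parameter form can be sketched as follows (the host, path, and credential values are hypothetical placeholders):

```python
from urllib.parse import urlencode, urlsplit

def with_client_credentials(url, client_id, client_secret):
    """Append the client credentials as the query parameters the
    Client ID Enforcement policy expects by default."""
    sep = "&" if urlsplit(url).query else "?"
    return url + sep + urlencode(
        {"client_id": client_id, "client_secret": client_secret}
    )

# Example: a consumer calling a client-ID-enforced API
request_url = with_client_credentials(
    "https://api.example.com/v1/accounts", "my-app-id", "my-app-secret"
)
```

If the policy's Mule expression is altered to read the credentials from, say, HTTP headers, the consumer would send them there instead of in the query string.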
1.4.7.4 OAuth2 Support
OAuth2 is an open standard for authorization that gives client applications secure, delegated access to server resources on behalf of a resource owner. It specifies a process by which resource owners authorize third-party access to their server resources without sharing their credentials.
Designed specifically to work with HTTP, OAuth essentially allows access tokens to be issued to third-party clients by an authorization server, with the approval of the resource owner. The client uses the access token to access the protected resources hosted by the resource server. OAuth2 is commonly used as a way for Internet users to log into third party websites using their Microsoft, Google, Twitter, etc. accounts without exposing their password.
Many libraries and implementations exist for most of the popular providers to help you or your clients consume an OAuth2-secured API.
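The mechanics of the simplest grant, client credentials (RFC 6749, section 4.4), can be sketched in two pieces; the token-endpoint URL and credentials would come from the provider and are not shown here:

```python
from urllib.parse import urlencode

def token_request_body(client_id, client_secret, scope=None):
    """Form-encoded body POSTed to the provider's token endpoint
    to exchange client credentials for an access token."""
    fields = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }
    if scope:
        fields["scope"] = scope
    return urlencode(fields)

def bearer_header(access_token):
    """Header the client then sends on every API call so the gateway
    policy can validate the token against the OAuth provider."""
    return {"Authorization": "Bearer " + access_token}
```

Grants that involve an end user (e.g. authorization code) add a browser redirect step, but the resulting bearer-token usage is the same.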
1.4.7.5 Policy and Endpoint Configuration
Before an API is deployed, policies that govern how it can be used must be defined and implemented. These policies will dictate who can access the API, how users will be authenticated and authorized, and how much traffic they may consume.
Implementing policies to control access (authentication, authorization, but also SLA and response time) is critical to keeping APIs and the underlying services they use managed.
Policies such as consumption limits are important for keeping the API performing at peak levels and within the expectations set for API consumers. Most solutions provide several pre-built policies out of the box in a policy library to manage common tasks like rate limiting, throttling, and security enforcement; in addition, they allow custom policies to be created.
The following are some of the key out-of-the-box (OOTB) policies and how they are applied:
| Policy | How Applied |
| --- | --- |
| Client ID (API Key) Enforcement | Applied at resource level for all consumers |
| Cross-Origin Resource Sharing | Applied at resource level for all consumers |
| HTTP Basic Authentication | Applied at resource level for all consumers |
| IP Blacklist | Applied at resource level for all consumers |
| IP Whitelist | Applied at resource level for all consumers |
| JSON Threat Protection | Applied at resource level for all consumers |
| LDAP Security Manager | Applied at resource level for all consumers |
| OAuth 2.0 | Applied at resource level for all consumers |
| OpenAM Access Token Enforcement | Applied at resource level for all consumers |
| PingFederate Access Token Enforcement | Applied at resource level for all consumers |
| Rate Limiting | Applied to all API calls, regardless of the consumer/source |
| Throttling | Applied to all API calls, regardless of the consumer/source |
| XML Threat Protection | Applied at resource level for all consumers |
| SLA Tiers | Applied to a specific consumer based on the SLA tier selected. Tiers such as “Platinum”, “Gold”, and “Silver” can be defined, each with its own rate limits. A tier is selected when granting a consumer access to an API, so tiers must be created beforehand to be available for selection. |
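The check a rate-limiting policy performs can be sketched with a simplified fixed-window model; real gateway policies also track SLA tiers and use distributed counters, so this is illustrative only:

```python
import time

class FixedWindowRateLimiter:
    """Simplified model of a gateway rate-limiting policy: at most
    `limit` calls per `window` seconds for each client."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.counters = {}  # client_id -> (window_start, call_count)

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        start, count = self.counters.get(client_id, (now, 0))
        if now - start >= self.window:       # window elapsed: reset it
            start, count = now, 0
        if count >= self.limit:              # over limit: reject the call
            self.counters[client_id] = (start, count)
            return False
        self.counters[client_id] = (start, count + 1)
        return True
```

A throttling policy differs in what happens on rejection: throttled calls are queued and retried rather than refused outright (typically with HTTP 429 for a hard rate limit).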
1.5.1 API/Service Build-Test Phase
Following the API design phase, the build and test phase translates the API definition from the design phase into well-tested implementation artifacts.
Once the API is designed, it needs to be built by connecting it to the backend services or applications that will power it. If the API connects to existing web services, this connection is a simple proxy and should be easy and fast to configure.
However, if data needs to be orchestrated across multiple systems on the backend, transformed to a new format, or if the backend system is legacy, or otherwise difficult to connect to, the API build is more complex. In this case, integration and orchestration capabilities are required.
MuleSoft’s Anypoint APIkit and associated tooling make the creation of well-designed REST APIs quick and efficient. Importing a RAML file allows automatic generation of the following items (screenshot below):
- A main flow with an HTTP endpoint, an APIkit Router, and an Exception Strategy reference
- Skeletal back-end flows, one for each resource/HTTP-verb pairing in the RAML file
- Several Global Exception Strategy mappings
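Illustratively, the generated scaffolding has roughly the following shape; all names here are placeholders, and the exact elements and flow-naming convention vary by Mule/APIkit version:

```xml
<!-- Main flow: HTTP listener feeding the APIkit router -->
<flow name="api-main">
    <http:listener config-ref="HTTP_Listener_config" path="/api/*"/>
    <apikit:router config-ref="api-config"/>
    <!-- exception-strategy / error-handler reference generated here -->
</flow>

<!-- One skeletal back-end flow per resource/HTTP-verb pairing in the RAML -->
<flow name="get:\accounts:api-config">
    <!-- implementation logic for GET /accounts goes here -->
</flow>
```

Developers then fill each skeletal flow with the transformation and connector logic for that resource.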
The build phase consists of two main tasks, build and test, which produce a service with an API, as shown below:
| Process Steps | Tools Used | Roles |
| --- | --- | --- |
| Develop the API using the API specs | Anypoint Studio, Postman or SoapUI, Anypoint Exchange | API Developers |
| Test the service | Anypoint Studio, MUnit, Postman or SoapUI | API Developers |
1.5.2 Develop API/Service using the API Specs
API Developers follow the service design standards and best practices documented in the section “Service Design Standards and Guidelines”. After the service specification (not the RAML spec) is created using those standards and guidelines, start the build/development using the steps below:
- API developers connect to the internal Anypoint Exchange, search for templates related to the API type, and import them into the workspace. The public Exchange has several pattern templates for reference.
- Modify the imported template to include the API definition and API-specific transformation code.
- Mavenize all projects and add dependencies in pom.xml for all required jars. The template already has a pom.xml referencing the Mule artifacts. Note that Maven is already embedded in Anypoint Studio 7, which PSEG is going to use, but an external Maven installation is recommended so that the enterprise's Nexus credentials can be added.
- If referencing a third-party jar file, search the public Maven repository for the dependency's artifactId, groupId, and version information. Instructions for setting up Maven on an individual laptop/desktop are described in the Maven C4E document.
- Write MUnit test cases for all flows within the API; MUnit should mock external integrations. More details on testing are described in the section below.
- Deploy and test in the runtime embedded in Anypoint Studio.
- Request the DevOps team to create the build and deploy jobs.
- Check in the code and API assets to the code repository (GitHub for PSEG).
- Request the DevOps/Release Management team to create alerts in Runtime Manager.
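For example, a third-party dependency found in the public Maven repository is added to the pom.xml like this (commons-lang3 is used purely as an illustration; substitute the groupId, artifactId, and version of the jar you actually need):

```xml
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.12.0</version>
</dependency>
```

With the enterprise Nexus configured in Maven's settings.xml, the same mechanism resolves internal artifacts as well.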
The testing phase is the second part of building the functionality. It takes the previously generated designs and implements the functionality, resulting in artifacts ready for deployment/publishing.
1.6.1 Unit Test
A successful unit test completes the developer's building effort. The unit test should be a documented artifact within the development line, based on the design of the API. A recommended way to execute unit tests is to use tools for the design and execution of the tests:
- JUnit for the testing of potential Java components
- MUnit for the implementation of unit tests in the Anypoint Studio environment. MUnit allows test flows to be implemented alongside the Anypoint Studio application flows. Each test flow references the implemented flow; additional assertions, pre-loading of values, and evaluation of return payloads and messages are supported.
As with all good unit tests, the Unit Test is a “white box” test (the test uses the knowledge on the implementation of the functionality). It is important that the test executes all lines of process in the application (full coverage), provides testing with valid and invalid input data, and ensures that all possible endpoints of the flow are reached correctly.
Since the Test Flow is part of the Anypoint project and is implemented in a separate folder in the development environment, the implementation of the Unit Test is automatically documented and can be repeated (e.g. by the test team) as required.
The test should not actually communicate with external systems (target endpoints). Instead, it should exercise a single flow, with a proper message payload generated and processed by each message processor in turn.
Below are some tips for writing unit tests:
- Targeted: unit tests that test one thing (including one set of inputs) at a time are targeted. The ideal unit test examines only one function or message processor of the Mule flow. It is easier to write a simple unit test covering one flow by using mocks and stubs to isolate the flow and create defined inputs for the flow and its message processors. This lets you test the output of any piece of code and check that it matches the expected output.
- Isolated: The code you are testing should be isolated from other code in the application as well as any external dependencies or events
- Repeatable & Predictable: A unit test should be capable of being run over and over and assuming that the code under test and the test itself have not changed, producing the same result.
- Independent: Never make your tests dependent on each other. The order of execution should never matter! That makes the tests hard to debug and maintain. There should not be any assumption that your unit tests are going to run in any specific order. Nor should your tests expect or require this.
- Mock out all external services/endpoints: mock external services or endpoint calls to avoid overlaps across multiple tests and to prevent different unit tests from influencing each other's outcomes.
- Avoid unnecessary preconditions: Avoid having common setup code that runs at the beginning of unit tests. Otherwise, it’s unclear what assumptions each test relies on, and indicates that you’re not testing just a single unit.
- Name your unit tests clearly and consistently: give each MUnit test a meaningful, business-friendly name that relates it to the functionality. Also add a description and comments at the top of each test case, with code maintainability and enhancement support in mind.
- Keep it simple: it is also a good idea to keep your development methodology simple. Readability and maintainability make it easy for the person who takes over the tests after you to jump in and make changes. Readable tests can also serve as internal documentation for your feature. Less time spent writing documentation gives you more time to write tests!
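The tips above are language-agnostic; as a compact analogue outside MUnit, here is the same mocking idea in Python (the function, client, and resource path are hypothetical, standing in for a flow that calls an external system):

```python
from unittest import mock

def fetch_account_status(client, account_id):
    """Code under test: reaches an external system through `client`."""
    data = client.get("/accounts/" + account_id)
    return data["status"].upper()

# Unit test: the external endpoint is mocked, so the test is targeted,
# isolated, repeatable, and independent of any real back-end.
def test_status_is_uppercased():
    client = mock.Mock()
    client.get.return_value = {"status": "active"}
    assert fetch_account_status(client, "A-100") == "ACTIVE"
    client.get.assert_called_once_with("/accounts/A-100")

test_status_is_uppercased()
```

In MUnit the equivalent is a mock on the outbound connector plus assertions on the resulting payload.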
1.6.2 Integration Test
The integration test is a functional and technical black-box end-to-end test. It demonstrates the correct implementation of the different processes and flows. As part of the integration test, the implemented functionality is triggered, calling either the concrete target systems or test environments dedicated to integration testing.
1.6.3 Regression Test
The regression test is an assurance test: a re-test of previously implemented and tested functionality. A regression test should be executed whenever new functionality is implemented and tested; it assures that existing functionality is not changed by new developments or changes. Regression tests are often automated so they can be executed as part of every test cycle.
1.6.4 Build/Test Iteration
Depending on the implementation methodology used (e.g. agile or scrum), the development and test phases iterate in faster cycles than the main development process.
1.7.1 API Deploy-Publish
The tested and accepted artifacts of the development and testing phase are deployed to the pre-production and production environments.
As APIs are shared resources inside and outside the organization they belong to, it is essential to provide adequate documentation when publishing them.
The documentation should be easy to find and publicly accessible. It should provide, at a minimum, the following information:
- Authentication, including acquiring and using authentication tokens
- API stability and versioning, including how to select the desired API version
- Common request and response headers
- Possible returned errors
- Examples of complete request/response
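As an illustration only (the API name, URLs, and fields are hypothetical), a RAML 1.0 skeleton shows where each of the items above can be captured directly in the API specification:

```raml
#%RAML 1.0
title: Account API                 # illustrative API
version: v1                        # API stability and versioning
baseUri: https://api.example.com/{version}
securitySchemes:
  oauth_2_0:
    type: OAuth 2.0                # authentication: acquiring and using tokens
    settings:
      accessTokenUri: https://api.example.com/oauth/token
      authorizationGrants: [ client_credentials ]
/accounts/{id}:
  get:
    headers:
      X-Correlation-Id:            # common request header
        type: string
    responses:
      200:
        body:
          application/json:
            example: { "accountId": "A-1", "status": "ACTIVE" }  # complete example
      404:
        description: Account not found   # possible returned errors
```

Keeping this information in the RAML itself means the published documentation in Exchange stays in step with the contract.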
Once API providers release an API, they should commit to not breaking the API contract without notice. The documentation must include any deprecation schedules and details of externally visible API updates.
1.7.2 Publication of an API to Exchange
Exchange is a community environment for the exchange of information, best practices, patterns, and connectors. The entries in Exchange can be made readable to audiences within PSEG. The API Owners/Creators need to publish the RAML/WSDL to Exchange so that it is available for reference, discovery, and re-use.
1.7.3 API Operation/Management
This section describes the operational side of the development cycle. It focuses on the core part of the phase: monitoring and analysis of the behavior of the APIs and the connected applications.
1.7.4 Request Access
- This developer portal can be exposed to external consumers.
- API definition and console are visible to the consumers so they can understand the functionality and interface.
- Once they identify the APIs they need access to, they request access by clicking the “Request API Access” button shown in the upper-right corner of the API developer portal.
- The request for approval goes to the “API Creators” who created the API in API Manager.
1.7.5 Provide API Access
The API Owners/Creators receive an email for each API access request. They log in to “API Manager” and grant access to the consuming application. Granting access generates an API key and API secret that need to be shared with the consuming application team. Note that this API key and API secret are specific to each environment and version of the API. The API Owners/Creators can “approve”, “reject”, or “delete” the request.
Once approved, the “API Consumers” who requested access receive an approval email with the client/API key and client/API secret.
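A minimal Python sketch of how a consuming application might attach those credentials to a request. The header names (`client_id`/`client_secret`), endpoint, and values are assumptions for illustration; the actual mechanism (headers vs. query parameters) depends on the policy configured in API Manager, and the credentials differ per environment and API version:

```python
from urllib.request import Request

def build_authenticated_request(url: str, client_id: str, client_secret: str) -> Request:
    # Hypothetical header names; check the policy applied to the API.
    req = Request(url)
    req.add_header("client_id", client_id)
    req.add_header("client_secret", client_secret)
    return req

req = build_authenticated_request(
    "https://api.example.com/v1/accounts/A-1",   # hypothetical endpoint
    client_id="my-app-id",                       # from the approval email
    client_secret="my-app-secret",               # keep out of source control
)
```

In practice the secret should come from a vault or environment variable rather than being hard-coded as it is in this sketch.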
1.7.6 Monitor and Analyze
Once the APIs are in production, it is important to ensure that their performance and usage are within expectations. Monitoring the running APIs provides visibility into their availability and performance and ensures that they fulfill the expectations of the consumer community. Analyzing (and visualizing) the results provides important information for the continuous improvement of the installed system and gives input to the design process.
Typical outcomes of the analytics are:
- Detailed metrics on the usage and performance of the APIs, including custom reports on the user community (the actual users of the API) and the performance of the interfaces
- Information regarding consumers (developers and applications) – which can be aggregated into a consolidated view of the usage of the API
- Forecasting at the business and technical level to provide input for future investments
It is best practice to plan the different levels of reporting and monitoring during the design phase. This allows the necessary reports and functions to be implemented from an early stage of API use.
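As a small illustration of the first outcome above, the following Python sketch derives per-consumer usage counts and a latency percentile from hypothetical access-log records (the consumer names and timings are made up):

```python
from collections import Counter

# (consumer application, response time in ms) — illustrative log records
records = [
    ("mobile-app", 120), ("mobile-app", 95), ("partner-portal", 340),
    ("mobile-app", 110), ("partner-portal", 290), ("batch-loader", 45),
]

# Usage metric: request counts aggregated per consuming application.
usage_by_consumer = Counter(app for app, _ in records)

def percentile(values, p):
    """Nearest-rank percentile — good enough for a monitoring sketch."""
    ordered = sorted(values)
    rank = max(0, int(round(p / 100 * len(ordered))) - 1)
    return ordered[rank]

# Performance metric: 95th-percentile latency across all consumers.
latencies = [ms for _, ms in records]
p95 = percentile(latencies, 95)

print("busiest consumer:", usage_by_consumer.most_common(1))
print("p95 latency (ms):", p95)
```

Real platforms compute these metrics from the gateway's analytics store rather than raw lists, but the shape of the outputs (per-consumer usage, latency distributions) is the same.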
1.7.7 Notification and Alarming
To ensure the performance of the APIs and related applications in a production environment, it is important that exceptional situations, or occurrences of predefined conditions, are brought to the attention of the responsible team. iPaas provides two distinct mechanisms used by deployed applications to report on their status and situation: notifications and alerts.
- Notifications are standard messages generated during the execution of the application; they appear in iPaas
- Alerts report on abnormal situations. Depending on the configuration, they are targeted to provide information on the situation, e.g. by email
iPaas provides several alert types out of the box:
- Performance: a defined number of events exceeded the time period for processing
- Deployment: a new deployment in the monitored environment has completed successfully or in error
- Connectivity: a secure data gateway has been connected or disconnected
- Problem: iPaas encounters a problem either with a worker or with an application that is monitored by the worker monitoring system
A standard alert creates a notification on the console and an alert action (an email or a message into a monitoring system). Additionally, custom alerts can be generated by applications; these are triggered by notifications sent to the iPaas console by the application.
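The logic behind a performance-type alert can be sketched as a simple threshold check. The function, limits, and data below are hypothetical illustrations of the rule "a defined number of events exceeded the time period for processing", not iPaas's actual implementation:

```python
def check_performance(durations_ms, threshold_ms=500, max_violations=3):
    """Return an alert message when too many events exceed the time limit,
    or None when processing times are within the configured bounds."""
    violations = [d for d in durations_ms if d > threshold_ms]
    if len(violations) > max_violations:
        return f"ALERT: {len(violations)} events exceeded {threshold_ms} ms"
    return None

# Within limits: no alert is raised.
print(check_performance([120, 480, 510, 95]))
# Too many slow events: the alert action would route this, e.g. by email.
print(check_performance([600, 720, 510, 650, 800]))
```

The same pattern generalizes to the other alert types: a predicate over observed events, plus a configured action when the predicate fires.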