
Architecture of Cloud Federation

Research Paper, 2019, 6 pages

Computer Science – General


Abstract

I show how a layered Cloud service model of software (SaaS), platform (PaaS), and infrastructure (IaaS) leverages multiple independent Clouds by creating a federation among the providers. The layered architecture leads naturally to a design in which inter-Cloud federation takes place at each service layer, mediated by a broker specific to the concerns of the parties at that layer. Federation increases consumer value for, and facilitates providing, IT services as a commodity. This business model for the Cloud is consistent with broker-mediated supply and service delivery chains in other commodity sectors such as finance and manufacturing. Concreteness is added to the federated Cloud model by considering how it works in delivering the Weather Research and Forecasting service (WRF) as SaaS using PaaS and IaaS support. WRF is used to illustrate the concepts of delegation and federation, the translation of service requirements between service layers, and the inter-Cloud broker functions needed to achieve federation.

© 2019 Ajit Singh. All rights reserved.

1. Introduction

With the aid of Cloud computing technology, businesses and institutions make compute resources available to customers and partners to create more capable, scalable, flexible, and cost-effective environments for application development and hosting. Cloud computing continues the trend, started with on-demand computing, strategic outsourcing, and grid computing, of providing IT resources as a standardized commodity, targeting real-time delivery of infrastructure and platform services. A next step in this evolution is to have cooperating providers of Cloud services, in which a customer request submitted to one Cloud provider is fulfilled by another under mediation of a brokering structure (e.g., [1]). This latter idea invokes a federation of Cloud domains providing a service, analogous to the interoperating grid resources created for a similar goal by research institutions using grid brokers in the grid computing framework. Fig. 1 is an example of what is meant by a federated Cloud structure mediated by brokers. The figure shows two independent Clouds, each supporting a vertical stack of service layer offerings from the software or application layer (SaaS or AaaS) at the top, through the middleware or platform layer (PaaS), to the operating system and infrastructure layer (IaaS). At each layer a choice is made to fulfill a service request through local resources using delegation, or by a partner Cloud through federation. A key feature of our model is that federation occurs between Cloud providers at matching layers of the service stack.

Fig. 1. Federation and delegation in Cloud application support [4]. (Figure not included in this excerpt.)

To illustrate how this works, consider a business providing a SaaS offering from a private or public Cloud. Users submit requests to the application layer, which assesses whether sufficient local resources are available to service the requests within a specified time. If the application layer cannot meet its service goals, it can optionally fulfill the requests through an independent SaaS-layer provider of the same service, as indicated by the horizontal (federation) line connecting Cloud A to B. Results are returned to the user as if locally produced by the application executing in Cloud A. Federation at the SaaS layer is analogous to the use in traditional business of ‘sub’ or ‘peer’ contractors who supply equivalent final parts or services to the primary provider, facilitating elasticity to support a dynamic market. While this approach is common in industry sectors that produce goods or services, such as manufacturing or publishing, it is not as common in software due to a lack of standard interfaces and insufficient market forces to motivate sharing at the service layer.
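This delegation-or-federation decision at the SaaS layer can be sketched as a simple routing policy. The Python sketch below is illustrative only: the class and method names (SaaSLayer, can_meet_goals, handle) and the capacity-based admission test are assumptions, not the implementation described in this paper.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Request:
    service: str           # e.g. "WRF"
    deadline_hours: float  # requested completion time
    max_cost: float        # user budget


@dataclass
class SaaSLayer:
    name: str
    capacity: int                                    # free local application slots
    peers: List["SaaSLayer"] = field(default_factory=list)

    def can_meet_goals(self, req: Request) -> bool:
        # Placeholder admission test; in practice this would ask the lower
        # layers (delegation) for an estimate of cost and completion time.
        return self.capacity > 0

    def handle(self, req: Request) -> str:
        if self.can_meet_goals(req):                 # delegation: serve locally
            self.capacity -= 1
            return f"{req.service} served locally by {self.name} (delegation)"
        for peer in self.peers:                      # federation: peer Cloud at the same layer
            if peer.can_meet_goals(req):
                peer.capacity -= 1
                return f"{req.service} served by peer {peer.name} (federation)"
        return f"{req.service} rejected: no capacity anywhere in the federation"


cloud_b = SaaSLayer("Cloud B", capacity=2)
cloud_a = SaaSLayer("Cloud A", capacity=0, peers=[cloud_b])
print(cloud_a.handle(Request("WRF", deadline_hours=6.0, max_cost=100.0)))
```

In this toy run, Cloud A has no free capacity, so the request is served by its federated peer Cloud B and the result is returned as if produced locally.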

2. Federation of Clouds

As in traditional scheduling, where most systems try to achieve the best trade-off between the users’ demands and the system’s policies and objectives, there are conflicting performance goals between the end users and the Cloud providers. While users focus on optimizing the performance of a single application or workflow, such as application throughput and user-perceived response time, Cloud providers aim to obtain the best system throughput, use resources efficiently, or consume less energy. Efficient brokering policies will try to satisfy the user requirements and the Clouds’ global performance at the same time. Cloud federation thereby introduces new avenues of research into brokering policies, such as techniques based on ensuring the required QoS level (e.g., through advance reservation) or techniques aiming at optimizing energy efficiency. Furthermore, the layered service model proposed in this paper isolates the brokering policies of federated Clouds at different layers, so they can be implemented following different approaches.

Existing work in Cloud brokering focuses on the federation of Clouds mainly at the IaaS layer, such as strategies based on match-making on top of Clouds [9], advance reservations [10,11], or energy efficiency [12]. More detailed information on these strategies can be found in Section 6.

2.1. Brokering at the SaaS layer

Brokering at the SaaS layer is mainly based on the user’s requirements and Service Level Agreements (SLAs) between different Cloud providers. As mentioned in Section 2, a Cloud provider that implements the SaaS layer should guarantee a given level of service for a set of application requirements. The application’s requirements can be generic and/or specific. Generic requirements do not depend on the characteristics of the application and can be used for many types of applications; some examples are response time (or completion time), cost (the cost of running the application), and level of security. Specific requirements deal with the characteristics and input parameters of the application. Taking WRF as a use case, some specific application requirements are the application version, geographic region, or resolution of the simulation.
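A SaaS-layer broker could represent these two classes of requirements roughly as follows. The field names and example values are illustrative assumptions, not an interface defined in this paper.

```python
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class SaaSRequest:
    # Generic requirements: independent of the application being requested.
    response_time_hours: float          # requested completion time
    max_cost: float                     # budget for running the application
    security_level: str                 # required level of security
    # Specific requirements: tied to the application (WRF in this use case).
    app_specific: Dict[str, Any] = field(default_factory=dict)


wrf_request = SaaSRequest(
    response_time_hours=6.0,
    max_cost=250.0,
    security_level="standard",
    app_specific={
        "application": "WRF",
        "version": "3.9",               # hypothetical version string
        "region": "Bay of Bengal",      # geographic region to simulate
        "resolution_km": 4,             # resolution of the simulation
    },
)
print(wrf_request)
```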

Forwarding: if the originator Cloud cannot accommodate the request or another Cloud can provide better cost-effectiveness, the request can be forwarded to another Cloud domain of the federated Cloud. Benchmarking or modeling the applications on the Clouds’ resources may be used to estimate the cost/completion time for a given application, but this is a transparent process at the federation level (each Cloud can have its own mechanisms).

Negotiation: one Cloud may take care of jobs from another Cloud upon agreement. The negotiation can be based on information from both past and future events. For example, a job request might be forwarded to a Cloud at higher cost if doing so significantly improves energy efficiency (e.g., by shutting down servers and/or CRAC units). Other considerations, such as Cloud reputation (e.g., based on SLA violation rate), can also be taken into account during the negotiation, as sketched below.
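Forwarding and negotiation can be viewed as a comparison of per-Cloud estimates against the user’s constraints. The sketch below assumes each domain returns (hypothetical) estimates of cost, completion time, energy use, and an SLA-based reputation; the weighting used here is an arbitrary illustration rather than a policy prescribed by this paper.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Estimate:
    cloud: str
    cost: float              # estimated price for the job
    completion_hours: float  # estimated completion time
    energy_kwh: float        # estimated energy consumption
    reputation: float        # 0..1, e.g. derived from the SLA violation rate


def negotiate(estimates: List[Estimate], deadline_hours: float,
              max_cost: float, energy_weight: float = 0.1) -> Optional[Estimate]:
    """Pick the domain that satisfies the user constraints and minimises a
    combined cost/energy score, breaking ties with reputation."""
    feasible = [e for e in estimates
                if e.completion_hours <= deadline_hours and e.cost <= max_cost]
    if not feasible:
        return None  # no domain in the federation can accommodate the job
    return min(feasible,
               key=lambda e: (e.cost + energy_weight * e.energy_kwh, -e.reputation))


offers = [
    Estimate("Cloud A", cost=220, completion_hours=7.5, energy_kwh=40, reputation=0.9),
    Estimate("Cloud B", cost=250, completion_hours=5.0, energy_kwh=25, reputation=0.8),
]
print(negotiate(offers, deadline_hours=6.0, max_cost=260.0))  # -> Cloud B
```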

2.2. Brokering at the PaaS layer

Brokering at the PaaS layer is mainly based on the application’s requirements in terms of deployment (e.g., compiler framework) and runtime support (e.g., libraries). Since compiling tools, libraries, and runtime environments can come from different vendors and with different characteristics, they can have different licensing conditions, prices, and even different functionality and performance. Furthermore, additional characteristics such as fault tolerance or platform security can be considered in brokering policies at the PaaS layer. For the WRF use case, the parameters are MPI-related, such as the MPI compiler characteristics, the runtime environment for MPI applications, and their associated costs and limitations (e.g., licenses for a specific MPI runtime).

The main goals of brokering a federated Cloud at the PaaS layer are focused on improving the applications’ environments, including:

Functionality/availability: brokering over multiple Clouds increases the probability of provisioning with more specialized compilers or execution environments.

Optimize applications: in some sense, the objective is to maximize the potential of the applications to obtain better performance. Policies can decide to use specific compilers or runtimes in order to obtain, for example, more efficient binaries for a given Cloud.

Fault tolerance and security: when choosing a specific execution environment from different Clouds, fault tolerance and security are attractive secondary goals that may add value to a given decision, or they can be primary goals if the nature of the application requires them.
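As a rough illustration of PaaS-layer brokering, the following sketch matches an application’s platform requirements (MPI runtime, license availability, fault tolerance) against the environments offered by federated Clouds and prefers the candidate expected to produce the most efficient binaries. All names, fields, and the scoring are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class PlatformOffer:
    cloud: str
    mpi_runtime: str      # e.g. "OpenMPI" or "MPICH"
    compiler: str         # e.g. "gcc" or a licensed vendor compiler
    free_licenses: int    # licenses currently available for the toolchain
    perf_score: float     # benchmark-derived score for binaries on this Cloud
    fault_tolerant: bool  # whether the environment offers checkpoint/restart


def select_platform(offers: List[PlatformOffer], required_runtime: str,
                    needed_licenses: int, require_ft: bool = False) -> Optional[PlatformOffer]:
    """Filter on functional requirements, then optimize for expected performance."""
    candidates = [o for o in offers
                  if o.mpi_runtime == required_runtime
                  and o.free_licenses >= needed_licenses
                  and (o.fault_tolerant or not require_ft)]
    return max(candidates, key=lambda o: o.perf_score, default=None)


offers = [
    PlatformOffer("Cloud A", "OpenMPI", "gcc", free_licenses=10, perf_score=0.7, fault_tolerant=False),
    PlatformOffer("Cloud B", "OpenMPI", "icc", free_licenses=4, perf_score=0.9, fault_tolerant=True),
]
print(select_platform(offers, required_runtime="OpenMPI", needed_licenses=2))
```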

2.3. Brokering at the IaaS layer

When addressing federation at the IaaS layer, I consider Cloud infrastructures to be hybrid, integrating different types of resource classes such as public and private Clouds from distributed locations. As the infrastructure is dynamic and can contain a wide array of resource classes with different characteristics and capabilities, it is important to be able to dynamically provision the appropriate mix of resources based on the objectives and requirements of the application. Furthermore, application requirements and resource state may change, for example, due to workload surges, system failures or emergency system maintenance, and as a result it is necessary to adapt the provisioning to match these changes in resource and application workload.

The goals of brokering methods and policies in federated Clouds at the IaaS layer can be found in different domains. Some examples are listed as follows:

Cost-effectiveness: federated Clouds provide a larger pool of resources, which may help improve cost-effectiveness. This includes improvements for both the user and the provider, such as, for a given cost, reducing the time to completion, increasing the system throughput, or optimizing resource utilization.

Acceleration: federated Clouds can be used as accelerators to reduce the application time to completion by, for example, using Cloud resources to exploit an additional level of parallelism by offloading appropriate tasks to Cloud resources, given budget constraints.

Conservation: federated Clouds can be used to conserve allocations, within the appropriate runtime and budget constraints.

Resilience: federated Clouds can be used to handle unexpected situations such as an unanticipated downtime, inadequate allocations, or failures of working nodes. Additional Cloud resources can be requested to alleviate the impact of the unexpected situations and meet user objectives.

Energy efficiency: federated Clouds can facilitate optimizing the energy efficiency of Clouds by, for example, workload consolidation, thermal-aware placement, or delegating part of the workload to external Clouds in order to optimize the energy efficiency of a given Cloud.
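A simplified picture of IaaS-layer brokering in a federation: choose a mix of resource classes that covers the requested cores within an hourly budget, and re-run the selection whenever the resource state or workload changes. The resource classes, prices, and the greedy strategy below are illustrative assumptions, not a policy from this paper.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ResourceClass:
    cloud: str
    name: str              # hypothetical labels, e.g. "private.small"
    cores: int
    ram_gb: int
    price_per_hour: float
    available: int         # instances currently free in this Cloud


def provision(classes: List[ResourceClass], cores_needed: int,
              budget_per_hour: float) -> List[ResourceClass]:
    """Greedily pick the cheapest cores across the federation until the
    request is covered or the hourly budget is exhausted."""
    plan, cost = [], 0.0
    for rc in sorted(classes, key=lambda r: r.price_per_hour / r.cores):
        while (rc.available > 0 and cores_needed > 0
               and cost + rc.price_per_hour <= budget_per_hour):
            plan.append(rc)
            rc.available -= 1
            cores_needed -= rc.cores
            cost += rc.price_per_hour
    if cores_needed > 0:
        raise RuntimeError("request cannot be met within budget; the broker "
                           "could fall back to negotiation or reject the job")
    return plan


pool = [
    ResourceClass("Cloud A", "private.small", cores=4, ram_gb=8, price_per_hour=0.10, available=8),
    ResourceClass("Cloud B", "public.large", cores=16, ram_gb=64, price_per_hour=0.60, available=4),
]
print([(rc.cloud, rc.name) for rc in provision(pool, cores_needed=48, budget_per_hour=2.0)])
```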

Use Case: Weather Research and Forecasting (WRF) as a Service

I present the WRF application as a use case for the federated Cloud architecture of Section 2. WRF is a parallel scientific application which performs mesoscale weather simulations of user-selectable geographic areas, with a given resolution for each area. Due to the nature of certain weather phenomena such as hurricanes or tornadoes, performing accurate predictions within very short time spans is vital to make appropriate preparations involving business operations management and government and human-related logistics. Thus, sharing resources between institutions to provide elasticity and dynamic capacity in extreme situations is key. Prior work describes an effort to enable the execution of WRF on shared resources, mostly focusing on Grid technologies.

WRF benefits from a hosted service architecture since it is a cross-domain application, requiring extensive IT administration and setup expertise in addition to scientific and meteorological knowledge to run it. Establishing WRF as a SaaS using the layer model separates the concerns of the scientists from the underlying platform and infrastructure issues. Efforts to separate these domains of expertise are ongoing, and at the service level a web portal has been developed as one approach to hide IT concerns from users.

3. Service Layer

3.1. Software as a Service layer

I propose a SaaS solution where users can request WRF executions by providing high-level requirements. When these requirements are entered via a GUI on a web portal, the portal generates the underlying files that are needed by the WRF executable. Examples are the region files that contain geographic and weather-related data.

The input parameters are:

Input files: Files that need to be processed during the experiment. These include a namelist file and its corresponding region files. The namelist file specifies all the runtime options desired by the user. The region files are binary files that describe the geographical area.

WRF version: Users may need results for a specific version of the software.

Parallel executions: How many ensemble runs to execute in parallel. (The service allows users to specify ensemble runs, in which multiple experiments are executed on the data but with different inputs; the results from all runs are averaged, which may yield more accurate results.)

Deadline: When the experiment should finish.

Cost: How much the user is willing to pay for the service.

The user specifies the listed parameters to define the execution of the tasks without needing to consider PaaS and IaaS details such as machine architecture or virtualization platform.
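To make the portal’s file generation concrete, the sketch below emits a heavily simplified namelist-style fragment from two GUI-level parameters. The keys shown (run_hours, dx, dy) appear in real WRF namelists, but a production namelist contains many more settings; the function and file names here are illustrative assumptions.

```python
def build_namelist(run_hours: int, resolution_km: float,
                   path: str = "namelist.input") -> str:
    """Write a (heavily simplified) Fortran-namelist-style fragment derived
    from the parameters entered in the portal GUI."""
    fragment = (
        "&time_control\n"
        f" run_hours = {run_hours},\n"
        "/\n"
        "&domains\n"
        f" dx = {int(resolution_km * 1000)},\n"  # grid spacing in metres
        f" dy = {int(resolution_km * 1000)},\n"
        "/\n"
    )
    with open(path, "w") as handle:
        handle.write(fragment)
    return fragment


print(build_namelist(run_hours=24, resolution_km=4.0))
```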

3.2. Platform as a Service layer

In our architecture, the PaaS layer is constructed by wrapping the MPI libraries and making them available as a service. The motivation for this is discussed in Section 2.1. Additionally, the PaaS layer is in charge of providing and managing the middleware that allows execution of WRF. In this case, I consider the following items as part of this layer:

WRF executables and required libraries: The PaaS layer needs to ensure that the required software will be available at the provider side. It needs to guarantee that the required operating system and appropriate library versions can be accessed at the site.

Software licenses: In the case of libraries or software that requires licenses, such as certain compilers or operating systems, the PaaS layer needs to certify that the required number of them will be available during execution.

Task decomposition: Another job of this layer is to manage the MPI execution, in terms of running the appropriate number of tasks to meet higher-level requirements. The user that interacts with the SaaS interface does not need to specify how the experiment has to be decomposed into tasks; that mapping needs to be resolved at the middleware management level, as sketched below.
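One way to picture task decomposition is as deriving the number of MPI processes per ensemble run from the deadline and a simple cost model. The calibration constant and the sizing heuristic below are purely illustrative assumptions, e.g. obtained by benchmarking WRF on the target Cloud.

```python
import math


def decompose(ensemble_runs: int, grid_cells: int, deadline_hours: float,
              cell_hours_per_core: float = 1e-5, max_procs_per_run: int = 256) -> dict:
    """Choose an MPI process count per ensemble member so that the estimated
    runtime fits the deadline; cell_hours_per_core is an assumed calibration
    constant relating problem size to core-hours."""
    core_hours_per_run = grid_cells * cell_hours_per_core
    procs = min(max(math.ceil(core_hours_per_run / deadline_hours), 1), max_procs_per_run)
    return {
        "mpi_procs_per_run": procs,
        "total_mpi_procs": procs * ensemble_runs,     # ensemble members run in parallel
        "estimated_hours_per_run": core_hours_per_run / procs,
    }


print(decompose(ensemble_runs=4, grid_cells=5_000_000, deadline_hours=6.0))
```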

3.3. Infrastructure as a Service layer

The IaaS layer provisions the execution environment to run the application. This layer’s interface needs to publish which resources it supports and the associated cost. Also, the IaaS component needs to consider staging-in of data and application binaries – e.g., in the form of Virtual Machine images.

Delegation from the PaaS layer again needs to happen through a translation component. First, the different combinations of resources produced by the prediction model are compared with what the virtualization manager can provide in order to calculate execution costs; then those parameters (amount of RAM, number of virtual processors, etc.) are passed to the IaaS manager to be used during VM instantiation. Next, the list of software needs to be retrieved by the IaaS layer to provision the VMs. There are different methods to do this: one is to associate a virtual disk image located in a file repository with the list of software components; another is to create the virtual disk image on demand before execution by aggregating the software packages from a repository.
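The translation step could look roughly like this: map the chosen resource combination to VM parameters and associate a disk image (or a package list for on-demand image creation) with each VM. The image name, package names, and sizing below are hypothetical.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class VMSpec:
    vcpus: int
    ram_gb: int
    image: str             # virtual disk image held in a file repository
    packages: List[str]    # or: packages to aggregate into an image on demand


def translate_to_iaas(mpi_procs: int, procs_per_vm: int, ram_gb_per_proc: int,
                      software: List[str]) -> List[VMSpec]:
    """Turn the PaaS-level plan (process count plus software stack) into a
    list of VM specifications for the IaaS manager."""
    n_vms = -(-mpi_procs // procs_per_vm)              # ceiling division
    spec = VMSpec(vcpus=procs_per_vm,
                  ram_gb=procs_per_vm * ram_gb_per_proc,
                  image="wrf-runtime.qcow2",           # hypothetical repository image
                  packages=software)
    return [spec] * n_vms


vms = translate_to_iaas(mpi_procs=36, procs_per_vm=8, ram_gb_per_proc=2,
                        software=["wrf-3.9", "openmpi", "netcdf"])
print(len(vms), vms[0])
```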

Once the resources have been provisioned, the IaaS layer instantiates the required VMs and control is given back to the PaaS component, which orchestrates the provisioning of VMs and the execution of the software on them. The platform layer is in charge of issuing the MPI calls that define which virtual hosts will take part in the execution, spawning the required number of processes, and ensuring the application runs successfully.
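Assembling the launch over the provisioned virtual hosts might then reduce to building an mpirun invocation. The flags follow Open MPI conventions (-np, --host with host:slots syntax); other MPI runtimes chosen at the PaaS layer use slightly different options, so this is a sketch rather than a universal command.

```python
from typing import List


def mpi_launch_command(hosts: List[str], procs_per_host: int,
                       binary: str = "./wrf.exe") -> List[str]:
    """Build an mpirun invocation over the provisioned virtual hosts using
    Open MPI-style host:slots syntax."""
    total_procs = procs_per_host * len(hosts)
    host_arg = ",".join(f"{h}:{procs_per_host}" for h in hosts)
    return ["mpirun", "-np", str(total_procs), "--host", host_arg, binary]


print(" ".join(mpi_launch_command(["vm-01", "vm-02", "vm-03"], procs_per_host=8)))
# mpirun -np 24 --host vm-01:8,vm-02:8,vm-03:8 ./wrf.exe
```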

4. Related work

I define federation as a collaborative network of Clouds that facilitates resource sharing across different service layers or models in order to achieve increased dynamic scalability and effective resource utilization when provisioning during peak demand. In principle, federation can be achieved using various types of Clouds such as public, private and hybrid Clouds, which are defined as:

Public Clouds: Cloud providers that follow a utility-based pricing model and offer services that are dynamically scalable and available to the general public. An example of a public Cloud is Amazon EC2.

Private Clouds: Clouds that provide services that are available to a specific set of customers and not the general public.

Hybrid Clouds: Clouds that encompass aspects of both public and private Clouds. The common usage model of a hybrid Cloud is that the most sensitive aspects of the service offering are processed in a private Cloud and less sensitive aspects are carried out in a public Cloud.

Many organizations provide their own definitions of the different Cloud service models; for example, [17][6] define Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). In the following, I give very brief definitions and cite exemplary service providers:

Software as a Service (SaaS) provides highly specialized, single-purpose software offered over the Internet and managed by the Cloud provider. This allows the consumer to delegate the task to the Cloud without needing much knowledge about the software or having to allocate resources to maintain the software locally [18]. Some examples of SaaS providers include Gmail and Salesforce.com [16].

Platform as a Service (PaaS) provides a scalable, fault-tolerant, and self-managing development environment for consumers to build applications on. A PaaS provider should allow developers to build applications utilizing the necessary libraries, provide runtime management for the hosted applications, and enable the platform to be application- and infrastructure-agnostic. Google App Engine is a good example of a PaaS offering. Other examples of PaaS providers include Force.com, Sun’s Caroline, and Microsoft’s Azure.

Infrastructure as a Service (IaaS) provides storage, network, and server capacity that is dynamically scalable on demand, in the form of a highly customizable environment which consumers can modify to fit their requirements. The best example of an IaaS provider is Amazon, with its Elastic Compute Cloud (EC2) and Simple Storage Service (S3).

[...]

Details

Pages: 6
Year: 2019
ISBN (eBook): 9783668949935
Language: English
Catalogue number: v488801
Institution: Patna Women's College – MCA Department
Grade: 8.7
Keywords: Cloud Computing, Architecture, SaaS, PaaS, IaaS, AaaS
