The cloud computing paradigm has been defined from several points of view, the two main directions being either an evolution of the grid and distributed computing paradigm or, on the contrary, a disruptive revolution in the classical paradigms of operating systems, network layers and web applications. This paper presents a distributed cloud computing platform called SlapOS, which unifies technologies and communication protocols into a new technology model for offering any application as a service. Both cloud and distributed computing can be efficient methods for optimizing resources that are aggregated from a grid of standard PCs hosted in homes, offices and small data centers. The paper fills a gap in the existing distributed computing literature by providing a distributed cloud computing model that can be applied to deploying various applications.
Keywords: Cloud Computing, Distributed Computing, Grid Computing, SlapOS, Distributed Cloud Computing
1 Introduction
Internet users are increasingly adding video content to existing online services and applications; as a result, the number of people viewing videos online has grown over the past year and the time spent per viewer has increased accordingly. Google sites, including YouTube, continue to be the most watched online video sites, with more than 35.4 million Google sites visitors watching YouTube [1].
Many real-world systems involve large numbers of highly interconnected heterogeneous components over the Internet. The cloud is among the most promising systems to be deployed at large scale in the near future, as the field already counts many success stories: Amazon EC2, Windows Azure and Google App Engine [2].
Cloud computing is traditionally divided into three market segments: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). To better understand cloud communications, it is useful to understand the different service models of cloud computing [3]. The best known is SaaS where the customer purchases access to an application that is hosted and runs in the cloud. PaaS refers to access to platforms that allow the customers to deploy their own applications in the cloud, and IaaS is at a lower level with access to the systems, storage, network connectivity, and OS management.
Recent research on Cloud Computing has focused on the implementation of Service Level Agreements (SLA) and the operation of large data centers [4]. However, in cases of force majeure such as natural disasters, strikes, terrorism or unpreventable accidents, an SLA can no longer be applied. Rather than centralizing Cloud Computing resources in large data centers, Distributed Cloud Computing aggregates resources from a grid of standard PCs hosted in homes, offices and small data centers.
Based on the proposed scenario, several questions arise regarding its performance and efficiency. Cloud nodes report on the resources they use, and trusting clients to report billing values is a well-known security issue. The security mechanisms included in the proposed solution are set up to prevent a node from cheating on reported billing values. However, traffic on unencrypted links can be intercepted, and it is possible for a node to join the cloud and start sniffing sensitive data. Therefore, the authentication process for distributed applications needs to be optimized [5].
Currently, the cloud model is also being applied in other scientific fields. This is the case with the so-called CloudRAN (C-RAN) [6] used in telecommunications: a new, cloud-computing-based radio access network architecture that can support 2G, 3G, 4G and future wireless communication standards. Due to the ever-growing complexity of radio access networks, the cloud computing paradigm can be successfully applied to the computationally intensive algorithms used in telecommunications networks [7].
2 Cloud Computing Technologies and Systems
In this section we present the technologies and systems that form the basis of cloud computing solutions and the role of communication networks in such solutions. We justify the timeliness of the proposed concept of cloud communications and briefly summarize projects aimed at developing cloud computing solutions. We also evaluate technologies such as grid computing and virtualization, together with communications aspects such as IPv6 networking.
Cloud computing is the delivery of computing and storage capacity as a service to a community of end-users. Cloud computing also extends the concept of IT services by combining user data, software and on-demand computation resources over a network. It relies on sharing of resources to achieve coherence and economies of scale similar to a utility (like the electricity grid) over a network (typically the Internet). At the foundation of cloud computing is the broader concept of converged infrastructure, virtualization and shared services.
The traditional layered approach implicitly supposes that the IaaS layer of Public Clouds is implemented by very large server farms, which are supposed to provide optimal efficiency through economies of scale and automation, while the IaaS layer of Private Clouds is implicitly supported by expensive Storage Area Network (SAN) hardware. There are several standardization efforts already under way, including the Distributed Management Task Force (DMTF) Open Cloud Standards Incubator, the Open Grid Forum's Open Cloud Computing Interface working group, and the Storage Networking Industry Association Cloud Storage Technical Work Group. In France, the Free Cloud Alliance promotes the first open source cloud computing stack covering IaaS, PaaS and SaaS with a consistent set of technologies targeted at high-performance and mission-critical applications. A good overview of the spectrum of cloud standards activity can be found on the OMG's cloud standards wiki [8].
On the SaaS side, cloud communication services support embedding communication capabilities into business applications such as Enterprise Resource Planning (ERP) and Customer Relationship Management (CRM) systems. For business people "on the move", these services can be accessed through a smartphone, supporting increased productivity while away from the office. These services go beyond supporting deployments of VoIP systems, IP contact centers, collaboration systems, and conferencing systems for both voice and video. They can be accessed from any location and linked into existing services to extend their capabilities, or used as standalone offerings. In terms of social networking, cloud-based communications provide click-to-call capabilities from social networking sites and access to instant messaging and video communication systems, broadening the interlinking of people in the social circle. Virtualization of resources is the core of any cloud computing architecture, allowing the use of abstract logical interfaces for accessing physical resources (servers, network, storage). The main methods of simulating the interface to physical objects are:
(a) multiplexing - creating multiple virtual objects within a single instance of a physical object, for example a processor that multiplexes multiple linked processes (threads);
(b) emulation - building a virtual object on a physical object of another type, for example a hard disk emulating physical RAM (through a swap file or partition);
(c) aggregation - creating a single virtual object from multiple physical objects, for example a number of hard drives forming a RAID disk unit;
(d) multiplexing combined with emulation - for example, TCP emulates a reliable communication channel and multiplexes data transfers between the physical communication channel and the processor.
Along with virtualization, service-oriented architecture (SOA) concepts and Web 2.0 services are the fundamental technologies for creating a cloud. The latter two can be connected and coordinated to form an SOA architecture whose components are implemented as independent services that communicate through messages, without necessarily relying on web service technology.
In systems based on cloud computing, the above-mentioned technologies for infrastructure, platforms and virtualized applications are implemented as services (usually web services) and are offered to users in the form of service-oriented architectures. Most public cloud systems provide access to services via standardized interfaces and protocols, such as Web 2.0 services and REST (Representational State Transfer, a software architecture style based on HTTP).
Addressing Internet resources is one of the main characteristics of a cloud network, so the migration to IPv6 is one of the important topics, as will be shown below. The IPv4 address space comprises 2^32 (about 4.3 billion) addresses, a number that has turned out to be insufficient; on January 11, 2011 the IPv4 address space was officially announced as exhausted [9]. In contrast, IPv6, using 128-bit addressing, has a capacity of up to 2^128 (about 3.4 x 10^38) addresses.
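As a quick illustration of the difference in scale, the two address spaces quoted above can be computed directly; this minimal Python sketch only reproduces the figures in the text.

```python
# Compare the IPv4 and IPv6 address spaces referenced above.
ipv4_space = 2 ** 32            # about 4.3 * 10^9 addresses
ipv6_space = 2 ** 128           # about 3.4 * 10^38 addresses

print(f"IPv4: {ipv4_space:.2e} addresses")
print(f"IPv6: {ipv6_space:.2e} addresses")
print(f"IPv6/IPv4 ratio: {ipv6_space // ipv4_space:.2e}")   # 2^96, about 7.9e28
```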
Virtual routing systems based on virtual machines offer several advantages: each instance is isolated (and stored as ordinary files of the operating system), common operations (start, stop, create, delete, copy, move) are easy to perform, additional operations are possible (returning to saved snapshots, backup and restoration of snapshots), and, last but not least, different protocol stacks (IPv4 and IPv6) can be used simultaneously.
3 Distributed and Decentralized Systems
Research carried out over many decades in the field of parallel and distributed computing forms the basis for the development of cloud computing systems: many of the problems identified in resource management algorithms have already been solved, and problems at the implementation level can be avoided. The cloud system is an evolution of distributed computing systems that extends the following paradigms:
(a) Computing grid systems,
(b) Utility computing (Computer processing systems as utilities),
(c) Internet Computing (Systems using the Internet for calculations),
(d) Autonomous computing systems,
(e) Edge computing (Computing systems at the network perimeter),
(f) Green computing (Computing systems with low environmental impact).
Another important aspect is communication and the modification of existing applications in order to convert them to the SaaS model. As we explain in the next section, the P2P and volunteer computing paradigms contributed to the development of distributed computing systems and decentralized applications.
The P2P model can be viewed as a precursor of cloud systems, being a variant of a distributed system based on flexible, low-cost access to the processing and storage resources provided by the participants in the system (which belong to different administrative domains). By definition, a Peer-to-Peer (P2P) system is a distributed and decentralized network architecture in which the peer nodes are typically user terminals. These nodes have equal attributes and tasks and act both as consumers and as data sources, providing a portion of their processing and storage capacity for use by other nodes, thus relieving the network of certain tasks [10].
For building a cloud network, we propose transforming the structure of the Chord protocol [11] from a typical ring topology into a master-slave multi-ring structure by using the address grouping mechanism of IPv6. Also, taking into account factors such as the performance and reliability a node needs in order to manage heterogeneous resources efficiently, we propose to calculate the reputation and performance of a node by extending the Credence P2P mechanism [12], resulting in a decentralized algorithm in which each node uses locally stored information to evaluate other nodes and shares performance and reputation assessments with neighboring nodes. Reputation is very important for such systems, where unstable or even malicious nodes may appear. The reputation of each node is calculated based on the match between its own votes and the votes of nodes in a group with similar voting criteria.
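To make the vote-matching idea concrete, the following minimal Python sketch (our own simplified illustration, not the actual Credence algorithm; the function name vote_correlation is ours) scores a peer by how well its votes agree with the local node's votes on commonly rated objects.

```python
# Simplified, Credence-inspired reputation score: a neighbour is weighted by
# how well its votes (+1 / -1 per object) agree with the local node's votes.
# Illustrative sketch only, not the actual Credence protocol.

def vote_correlation(my_votes: dict, peer_votes: dict) -> float:
    """Return a score in [-1, 1] based on votes cast on common objects."""
    common = set(my_votes) & set(peer_votes)
    if not common:
        return 0.0                      # no overlap: neutral reputation
    agreement = sum(my_votes[o] * peer_votes[o] for o in common)
    return agreement / len(common)

# Example: two nodes that agree on 3 of 4 commonly voted objects.
mine = {"objA": +1, "objB": -1, "objC": +1, "objD": +1}
peer = {"objA": +1, "objB": -1, "objC": +1, "objD": -1}
print(vote_correlation(mine, peer))     # 0.5
```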
4 Distributed Cloud Computing Architectures and Algorithms
Cloud architectures can be analyzed from two different perspectives: an organizational one and a technical one. In this section we first discuss the organizational point of view, which concerns how users and service providers are organized into domains and how these domains are separated. We then examine the technical functionality and the algorithms of the cloud resource management system.
The traditional way of running IT in a company can be very problematic. As changes occur in the business environment, the need to implement more efficient enterprise systems appears. Cloud Computing technology represents the next enterprise computing paradigm and a solution to most current enterprise IT problems. This section examines how Cloud technology has evolved and how it affects performance, by presenting a case study based on a proposed open source Cloud platform.
A deployment model defines the purpose of the Cloud and where it is located. The NIST definition [13] of the four deployment models is as follows:
* Public Cloud: The public Cloud infrastructure is made available to the general public or to a large industry group and is owned by an organization selling Cloud services.
* Private Cloud: The private Cloud infrastructure is operated for the exclusive use of an organization. The Cloud may be managed by that organization or a third party. Private Clouds may be either on or off-premises.
* Hybrid Cloud: A hybrid Cloud combines multiple Clouds (private, community or public) where those Clouds retain their unique identities but are bound together as a unit. A hybrid Cloud may offer standardized or proprietary access to data and applications, as well as application portability.
* Community Cloud: A community Cloud is one where the Cloud has been organized to serve a common function or purpose.
Networks based on cloud computing offer the ability to manage resources that are theoretically almost unlimited in terms of computing power, memory, processing and storage, while customers can adjust them to their dynamically changing needs in a short time, usually a few minutes [5].
To manage a cloud services network, five specific characteristics have been defined [14]: self-service according to need, network accessibility, shared resources, rapid elasticity and measurable services.
Policies are necessary to determine the principles by which resource management decisions are made, and then to determine the mechanisms for implementing those policies. Cloud resource management policies can be grouped into five classes:
* Access (admission) control - restricts access to the system, in the sense of accepting new processing tasks according to the control policy while allowing tasks already in progress to complete, which requires knowledge of the global system state;
* Efficient capacity allocation - refers to the allocation of resources to each active instance of a service in the cloud; this is a global optimization problem that requires searching for resources in a space with constraints and frequent changes of the component systems;
* Load balancing - distributes processing tasks evenly across a group of servers and can be combined locally with energy optimization;
* Energy optimization - one of the main ways to reduce service costs; it can be achieved by concentrating processing tasks on the smallest possible number of servers and switching the remaining servers to standby mode, although managing QoS becomes difficult in this context (a simple consolidation sketch is given after this list);
* Ensuring QoS - can be treated as a resource optimization problem, but the models require complex calculations that cannot be performed efficiently in a time short enough for resource allocation and management decisions.
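As an illustration of the consolidation idea behind the energy optimization policy, the sketch below packs task loads onto as few unit-capacity servers as possible using a first-fit decreasing heuristic; the capacities and task sizes are hypothetical, and the code is our own illustration rather than a policy taken from the cited works.

```python
# First-fit decreasing consolidation: place tasks on as few servers as
# possible so that idle servers can be switched to standby. Figures are
# purely illustrative.

def consolidate(tasks, capacity):
    """tasks: list of CPU demands; returns a list of per-server loads."""
    servers = []
    for demand in sorted(tasks, reverse=True):     # largest tasks first
        for i, load in enumerate(servers):
            if load + demand <= capacity:
                servers[i] += demand
                break
        else:
            servers.append(demand)                 # power on a new server
    return servers

print(consolidate([0.6, 0.3, 0.5, 0.2, 0.4], capacity=1.0))
# [1.0, 1.0] -> two active servers instead of five
```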
Resource management is the core operation of any computer system and is subject to three basic evaluation criteria: performance, functionality and cost.
Resource management policies can be implemented through four types of mechanisms, which rely on well-defined approaches rather than ad-hoc methods:
(a) Control theory - uses the feedback principle to ensure system stability and to predict transient behaviour [15]; however, it can only predict local rather than global behaviour, and the Kalman filters used for prediction rely on simplified and therefore unrealistic models;
(b) Utility-based algorithms - require a performance model and a mechanism that correlates user-level performance with cost [16];
(c) Machine learning techniques - use a branch of artificial intelligence by which a system can learn from process data; an advantage is that these techniques do not require a performance model of the system [17], and they can be applied to coordinate multiple nodes that are themselves autonomous system managers;
(d) Economic mechanisms - rely on the operating principles of a free market for trading resources and do not require a model of the system, for example combinatorial auctions for resource packages [18] (a simple winner-determination sketch is given after this list).
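To illustrate the economic mechanism in item (d), the following sketch performs a greedy winner determination for bids on resource packages; it is our own simplified illustration with hypothetical bids, not the market mechanism of [18].

```python
# Greedy winner determination for a combinatorial auction on resource
# packages: accept bids in decreasing order of price per requested unit,
# skipping any bid that overlaps resources already sold. Real winner
# determination is NP-hard and solved with more elaborate methods.

def greedy_auction(bids):
    """bids: list of (bidder, price, set_of_resource_units)."""
    winners, sold = [], set()
    for bidder, price, units in sorted(bids, key=lambda b: b[1] / len(b[2]),
                                       reverse=True):
        if units & sold:
            continue                       # conflicts with an accepted bid
        winners.append(bidder)
        sold |= units
    return winners

bids = [
    ("A", 12.0, {"cpu1", "cpu2", "disk1"}),
    ("B", 10.0, {"cpu1"}),
    ("C",  5.0, {"cpu3", "disk2"}),
]
print(greedy_auction(bids))                # ['B', 'C'] in this example
```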
The performance model of a cloud system may become very complex, and the analytical solutions described above reach their limits when the number of nodes is large. In addition, in some cases monitoring systems that collect information on the system state may be regarded as intrusive, or the models cannot provide accurate data.
The parameters needed are the average use of CPU, memory, storage space and power consumption, and a strategy is considered improved when it reduces the number of requests sent to controllers to change the number of virtual machines. A theoretical approach based on optimal control proved difficult in terms of the amount of computation required, and convincing results cannot be based on empirical values of the parameters in the optimal control equations. Another approach, based on combinatorial auctions, offered a simple solution for resource management by reducing the problem to one of packing sets of resources.
As innovative solutions we consider the use of epidemic (infection/immunity) algorithms in which virtual machine configurations reproduce by cloning instead of crossover, with at least one mutation applied to each individual in the population; this results in a selection of the best individuals with a loss of diversity below 70%, compared to 90% for genetic algorithms.
Thus, this approach foresees that in the future we will use servers with normalized performance, uniform communication links and data centers composed of modular components that can easily be interchanged as new modules become available and the technology improves.
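The clone-and-mutate idea mentioned above can be illustrated with a generic mutation-only evolutionary loop for a VM-to-server placement problem; the cost function, parameters and figures below are hypothetical, and this sketch is not the specific epidemic algorithm referred to in the text.

```python
import random

# Mutation-only evolutionary search for a VM-to-server placement, reproducing
# by cloning (no crossover) and applying at least one mutation per clone.
# The cost function and all parameters are illustrative assumptions.

def cost(placement, loads, capacity):
    """Penalise overloaded servers and count the number of servers used."""
    per_server = {}
    for vm, server in enumerate(placement):
        per_server[server] = per_server.get(server, 0.0) + loads[vm]
    overload = sum(max(0.0, l - capacity) for l in per_server.values())
    return len(per_server) + 10.0 * overload

def evolve(loads, servers, capacity, pop=20, generations=100):
    population = [[random.randrange(servers) for _ in loads] for _ in range(pop)]
    for _ in range(generations):
        clones = []
        for individual in population:
            clone = list(individual)                 # clone instead of crossover
            clone[random.randrange(len(clone))] = random.randrange(servers)
            clones.append(clone)                     # at least one mutation
        population = sorted(population + clones,
                            key=lambda p: cost(p, loads, capacity))[:pop]
    return population[0]

loads = [0.6, 0.3, 0.5, 0.2, 0.4]
best = evolve(loads, servers=5, capacity=1.0)
print(best, cost(best, loads, 1.0))
```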
5 Cloud Computing Services
In this section we present the services offered by a cloud computing based system and how they can be grouped into categories of resources offered at particular levels. We underline the difference between cloud services, with their attributes, and the concept of cloud computing described in the previous sections, the latter actually denoting the technologies and systems that enable the creation of cloud services.
As was shown in previous sections, cloud systems allow the implementation of heterogeneous systems and technologies, resource allocation mechanisms and applications as Web services. Consequently, it is no wonder that the cloud services are very diverse, and at first sight very heterogeneous in terms of functionality and use.
Based on the above conceptual architectures, we first build a comparison between cloud computing and its services, and then construct a chart that helps categorize the different service levels. This allows instances of each class to be compared and is useful for determining equivalence classes and for finding complementary cloud services in order to achieve optimal solutions for different usage scenarios.
In Section 2 we saw that there are many standardization initiatives for defining the types and classes of cloud services and for structuring the levels of service delivery, best known as IaaS, PaaS and SaaS. The management strategies for these levels differ, but they share one property: the very high fluctuation of resource demand, which cloud service operators must take into account in order to streamline resource allocation by implementing elasticity mechanisms. In some cases traffic peaks can be anticipated, for example for web services with seasonal peaks, but contingency must still be handled by automatic allocation mechanisms. This implies that there is a pool of resources that can be released or allocated according to demand, and that a monitoring system is available, together with a control mechanism that decides in real time on the reallocation of resources.
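A minimal sketch of the monitoring-and-control loop just described: a controller compares average utilisation against simple thresholds and decides whether instances should be allocated from or released back to the pool. The thresholds, limits and function name are illustrative assumptions, not part of any cited system.

```python
# Threshold-based elasticity: the controller watches average utilisation and
# requests more (or fewer) instances. All thresholds are illustrative.

def scaling_decision(avg_utilisation, instances, high=0.8, low=0.3,
                     min_instances=1, max_instances=100):
    if avg_utilisation > high and instances < max_instances:
        return instances + 1        # allocate one more instance from the pool
    if avg_utilisation < low and instances > min_instances:
        return instances - 1        # release an instance back to the pool
    return instances                # within the target band: no change

print(scaling_decision(0.92, instances=4))   # 5
print(scaling_decision(0.15, instances=4))   # 3
```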
The IaaS, PaaS and SaaS levels are detailed below. Infrastructure as a Service (IaaS), also known as Hardware as a Service (HaaS), is one of the most popular ways of providing resources as a service. It can be divided into several categories, the most important being:
(a) Computing as a Service (CaaS), where virtual machines are rented and priced according to the resources consumed in a given unit of time - main memory, CPU, operating system features and pre-installed applications (a simple billing sketch is given after this list);
(b) Data as a Service (DaaS), also known as Storage as a Service (STaaS), where virtually unlimited storage space is provided for storing files regardless of size and type, charged according to the amount of data stored or transferred;
(c) Network as a Service (NaaS), which refers to cloud services providing virtual network connections, such as VPNs or MVNOs, or sharing infrastructure that may belong to third parties, e.g. communication operators [19].
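To illustrate the consumption-based charging mentioned for CaaS and DaaS in items (a) and (b), the sketch below totals a bill from metered usage; all rates and metric names are hypothetical.

```python
# Usage-based charging sketch: price a virtual machine by the resources it
# consumed over a billing period. All rates are hypothetical examples.

RATES = {
    "cpu_hours":    0.05,   # currency units per CPU-hour
    "ram_gb_hours": 0.01,   # per GB of RAM per hour
    "storage_gb":   0.10,   # per GB stored per month
    "transfer_gb":  0.08,   # per GB transferred
}

def invoice(usage: dict) -> float:
    """usage maps the metrics above to the consumed quantities."""
    return sum(RATES[metric] * quantity for metric, quantity in usage.items())

print(invoice({"cpu_hours": 720, "ram_gb_hours": 2880,
               "storage_gb": 50, "transfer_gb": 120}))
# 720*0.05 + 2880*0.01 + 50*0.10 + 120*0.08 = 79.4
```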
Platform as a Service (PaaS) offerings are mainly used by developers to code applications for end users, rather than being accessed directly by the latter. The platform is essentially a middleware that provides a programming and runtime environment, so that applications written in different programming languages can be offered as services.
Software applications offered as cloud services make up the Software as a Service (SaaS) level, placed above PaaS and IaaS in terms of service delivery levels. This model has the advantage that users do not need to install software on local, resource-limited equipment; instead, the applications can easily be accessed and configured through a web interface.
Recent scientific literature has introduced new concepts of other applications offered as services (XaaS), culminating at the topmost level with people offered as a service over the cloud computing stack, the so-called Human as a Service (HuaaS). This approach emphasizes that the cloud computing paradigm is not limited to technological resources but can be expanded to provide services through the participation of human beings as resources. As a subset of HuaaS, the term crowdsourcing is emerging, describing a service provided by a group of interconnected people performing certain tasks or solving complex problems, including in crisis situations [20].
In [21] the authors propose a service for managing contextual information in large scale distributed systems. This work introduces the concept of Reasoning as a Service (RaaS) and is based on XML messages for the configuration of M2M services that adapt to the changing context. In addition, the authors of [22] propose a system for the management of communications that uses contextual information services in communication platforms, with the goal of making user interaction more effective.
These services are also the foundation for new Internet paradigms, such as:
- Internet of Things (IoT): a global infrastructure of sensor networks and devices, based on interoperable communications protocols interconnecting physical and virtual objects in an information network;
- Internet of Services (IoS): standardized, open and configurable interfaces that enable different applications to function as interoperable services, using specific semantics for understanding, aggregating and processing information derived from different sources, formats or other service levels;
- Internet of People (IoP): as shown by the HuaaS concept, people become part of intelligent heterogeneous networks, being able to connect, interact and easily share information among themselves and with their social or physical environment;
- Internet of Everything (IoE): the ability to connect any device capable of providing web service interfaces, making access to such devices part of natural human-machine interaction.
6 Development of the Distributed Cloud Computing Platform SlapOS
In this section we present the architectural aspects and main features of the SlapOS platform, which was chosen to build a cloud platform based on the research presented in the previous sections.
The SlapOS architecture is based on the concept of Master and Slave nodes, as shown in Figure 1, and is detailed below in terms of software and of the functionality of a distributed cloud platform. Master nodes are the central directory nodes of the cloud system; they allocate processes to Slave nodes and keep track of the status of each Slave node and of the software installed on it. Slave nodes can be installed on any computer, both in data centers and in private networks, and their role is to install and run software processes.
The SlapOS software kernel has a hierarchical architecture built on top of a POSIX operating system and comprises the following modules: SLAPGrid, Supervisord and Buildout [4], as shown in Figure 2.
SlapOS works based on the SLAP protocol, an acronym for "Simple Language for Accounting and Provisioning", as presented in Figure 3. The protocol is independent of the programming language; in the experimental platform it is implemented in Python by the SLAPGrid module on the Slave node and by the corresponding SLAP module of the ERP5 Cloud Engine on the Master node.
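The SLAP exchange between a Slave node and the Master can be pictured as a simple request/report cycle over HTTP. The sketch below is only a schematic illustration of such a cycle; the endpoint URL, paths and JSON payloads are hypothetical and do not reproduce the actual SLAP message format or the slapos.core API.

```python
import json
import urllib.request

# Schematic illustration of a SLAP-style exchange: the Slave node asks the
# Master what should run in its partitions and later reports usage. The URL,
# paths and JSON payloads below are hypothetical, not the real SLAP format.

MASTER_URL = "https://slapos-master.example.org"   # hypothetical endpoint

def request_partitions(computer_id: str):
    req = urllib.request.Request(f"{MASTER_URL}/computer/{computer_id}/partitions")
    with urllib.request.urlopen(req) as resp:
        # e.g. [{"partition": "slappart0", "software": "..."}]
        return json.load(resp)

def report_usage(computer_id: str, usage: dict):
    data = json.dumps(usage).encode()
    req = urllib.request.Request(f"{MASTER_URL}/computer/{computer_id}/usage",
                                 data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
```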
After some time, a typical SlapOS Node will host multiple software applications and, for each software application, multiple instances, each running in a different process, as depicted in Figure 4.
As shown in Figure 5, each computing partition is assigned a dedicated user N (slapuserN), a dedicated directory (/srv/slapgrid/slappartN) and several network addresses: a global IPv6 address, a private IPv4 address and an emulated (tap) Ethernet interface (slaptapN). In addition, a public IPv4 address or a disk storage unit (/dev/sdaN) can be assigned.
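The per-partition conventions listed above can be summarised programmatically; the short sketch below simply derives the identifiers used in Figure 5 for a given partition index N and is our own illustration.

```python
# Derive the per-partition identifiers described above for partition index N.
def partition_layout(n: int) -> dict:
    return {
        "user":      f"slapuser{n}",
        "directory": f"/srv/slapgrid/slappart{n}",
        "tap_iface": f"slaptap{n}",
        # each partition is additionally assigned a global IPv6 address and a
        # private IPv4 address (optionally a public IPv4 or a /dev/sdaN disk)
    }

print(partition_layout(3))
```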
The development of Ubuntu alongside cloud networks was one of the reasons for choosing it as the operating system of the SlapOS nodes. Both Ubuntu Server versions, the latest 13.10 64-bit and the 12.04 LTS 64-bit, were used to host the cloud nodes.
As a general approach, we first install a UNIX operating system, preferably Ubuntu Linux, as argued above. The next step is to configure the network parameters and download the sources for installation, after which the kernel modules Buildout, Supervisord and SLAPGrid are installed. Finally, we set up the partitions and assign different applications in order to test the cloud system.
Table 1 presents the hardware specifications of the two Fujitsu Siemens servers and the three HP servers that were used to build the private cloud infrastructure presented in this paper.
For a SlapOS Slave node composed of several computing partitions, we implemented the SLAP protocol to demonstrate: how folders are created on the Slave node, the allocation of network interfaces to each partition, the creation of Buildout-based configuration files for the allocation and instantiation of applications, and the control of processes through Supervisord.
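A compressed sketch of what these Slave-side provisioning steps amount to: creating the partition directory and emitting a Supervisord program section so that the instance process can be controlled. The directory and user names follow the convention described above, while the program template is our own simplified example, not the configuration actually generated by Buildout.

```python
import os

# Simplified slave-side provisioning: create the partition directory and write
# a Supervisord program entry for the instance. The template is illustrative,
# not the configuration that Buildout actually generates.

SUPERVISOR_TEMPLATE = """[program:slappart{n}]
command={command}
user=slapuser{n}
directory=/srv/slapgrid/slappart{n}
autostart=true
autorestart=true
"""

def provision_partition(n: int, command: str,
                        conf_dir: str = "/etc/supervisor/conf.d"):
    os.makedirs(f"/srv/slapgrid/slappart{n}", exist_ok=True)
    with open(os.path.join(conf_dir, f"slappart{n}.conf"), "w") as f:
        f.write(SUPERVISOR_TEMPLATE.format(n=n, command=command))

# Example (requires appropriate permissions):
# provision_partition(0, "/srv/slapgrid/slappart0/bin/run-instance")
```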
A typical commercial PC can usually host 100 computing partitions, while the servers we deployed can easily accommodate 500 to 700 computing partitions.
The SlapOS Master node is used for requesting an instance of a given communication application and seeks a free partition according to the specified SLA parameters. The SlapOS Slave node installs the chosen software on a free partition and starts an instance of the application; when the instance is no longer needed, it is deleted. Finally, we implement on the SlapOS Master the mechanisms for monitoring and metering the resources consumed by the processes on the SlapOS Slave nodes.
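The Master-side allocation step can be pictured as a filter over the free partitions: find one whose properties satisfy the requested SLA parameters. This is an illustrative sketch with hypothetical SLA keys, not the ERP5-based implementation.

```python
# Illustrative master-side allocation: pick the first free partition whose
# properties satisfy the requested SLA parameters (keys are hypothetical).

def allocate(partitions, sla):
    """partitions: list of dicts with a 'free' flag and capability values."""
    for p in partitions:
        if p["free"] and all(p.get(k) == v for k, v in sla.items()):
            p["free"] = False
            return p["id"]
    return None                             # no partition matches the SLA

partitions = [
    {"id": "slappart0", "free": False, "region": "eu", "disk": "ssd"},
    {"id": "slappart1", "free": True,  "region": "eu", "disk": "ssd"},
]
print(allocate(partitions, {"region": "eu", "disk": "ssd"}))   # slappart1
```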
At the limit, a computing partition can host multiple instances of the same application, consuming the resources of the cloud node. To increase performance, we recommend installing one application per elementary computing partition, which allows network expansion and resource optimization thanks to the granularity of allocation.
Below are some of the skills that IT departments need to develop, both to benefit from the opportunities offered by Cloud Computing (CC) and to successfully overcome the challenges it poses.
* Areas of competence for Cloud Computing in the enterprise. The first aspect is the ability of organizations to standardize and compare economic models for IT/Cloud services. Organizations must be able to compare the cost structures of the traditional IT charge-back model with the usage-based charging of Cloud Computing, i.e. processing capacity (CPU/hour) and storage (GB/month). Hidden costs, such as management, administration and transition costs, including staff training, should also be included in the calculation. The gross price of Cloud Computing is very convincing at the moment, but will be tempered by these costs. Therefore, organizations need to assess as accurately as possible the time required to recover the investment made in adopting the Cloud Computing working environment;
* Creating effective prototypes and a supplier selection process. Choosing a cloud computing platform does not in itself bring immediate advantages for an organization. Smart organizations will moderate their approach by proactively creating prototypes and a selection process that will help them choose the best set of Cloud Computing services and tools. For example, early implementation of non-critical applications can be a useful way to accommodate the requirements of the Cloud Computing working environment and the possibility of integration with existing IT processes. Such an implementation can validate the business model and its efficiency and, furthermore, provides an opportunity to explore the area surrounding Cloud Computing: performance and latency issues and other problems caused by the distance between the services and the points where they are used, as well as the strengths and weaknesses of the platforms used in prototypes or pilot applications;
* Technical ability to adopt and operate Cloud Computing services. After economic efficiency has been demonstrated, the next challenge is acquiring technical fluency with Cloud Computing platforms, something that tends to bring attention to open source technologies, web programming languages and less familiar application models. Architects, developers, testers, operational teams, and those responsible for security and network issues will have to adapt or be adapted to Cloud Computing. This adjustment process must be given time and resources so that the core business is not affected;
* Incorporation of Cloud Computing into IT strategic planning. Introducing Cloud Computing raises important questions, such as the extent to which it affects the implementation of a Service Oriented Architecture (SOA) strategy, its impact on disaster recovery plans, on backup and restoration policies and on legally mandated data archiving policies, the risk profile of using CC and the corresponding mitigation strategy, and the risk of platform lock-in and how it can be avoided. All these details need to be clarified, given that CC is applied to applications and IT functions of increasing importance.
The fundamental purpose of Cloud Computing technology is the sharing of resources and services, and the most important part of the implementation process is managing and assigning resources effectively [23]. Introducing the economic element may initially reflect the supply of resources and the dynamic changes of demand. When a significant change occurs in the IT world, its implications are unclear, so organizations, especially large ones, tend to adopt a precautionary attitude based on careful risk management. Occasionally, some of these changes bring cost savings, operational enhancements and approaches to business issues with important strategic advantages. One of these changes is the adoption of the Cloud Computing working environment; as the authors of [24] note, "The greater the benefit in one or more of these areas, the strategic advantage is more obvious, and this potentially impacts on a wider range".
There are benefits that can influence the companies and organizations that choose to be proactive in managing the potential risks. These benefits include access to completely different levels of scale and cost savings, the ability to prioritize quickly, and the capability to operate IT systems at a lower cost than previously possible. Changes in infrastructure management, including maintenance and updating (Cloud Computing providers offer extensive virtualization and accessibility of basic components, making them insensitive to replacement and upgrade), as well as the improved agility of solutions and the possibility to choose between suppliers, especially in a context in which interoperability is indisputable, are all benefits of implementing Cloud Computing.
We believe that the Cloud Computing working environment has great potential to change the mentality and the way of doing business, whether these technologies are exploited within companies or in their relations with the wider economic world.
7 Conclusions
We have shown how SlapOS allows a more efficient use of hardware resources by running application instances as processes, instead of creating a virtual machine for each application as other cloud systems, such as Amazon AWS EC2, do.
In conclusion, SlapOS is the recommended platform for open source application developers to transform their applications to a distributed SaaS model, including their migration to IPv6.
The presented research study can be used for implementing distributed cloud platforms for different applications in areas such as agriculture, smart cities, expert systems, e-learning platforms or neutrino radiation monitoring.
References
[1] G. Suciu and S. Halunga, "Cloud Content Distribution Networks for DVB Applications," Constanta Maritime University Annals, vol. 13, no. 18, pp. 205-208, 2012.
[2] G. Suciu, E. G. Ularu and R. Craciunescu, "Public versus Private Cloud Adoption - a Case Study based on Open Source Cloud Platforms," in 20th Telecommunications Forum - TELFOR 2012, IEEE Communications Society, Belgrade, 2012.
[3] L. M. Vaquero, L. Rodero-Merino, J. Caceres and M. Lindner, "A break in the clouds: towards a cloud definition," ACM SIGCOMM Computer Communication Review, vol. 39, no. 1, pp. 50-55, 2008.
[4] G. Suciu, C. G. Cernat, G. Todoran, V. Suciu, V. A. Poenaru, T. L. Militaru and S. Halunga, "A solution for implementing resilience in open source Cloud platforms," in Proceedings of 2012 9th International Conference on Communications (COMM), IEEE Communications Society, Bucharest, 2012.
[5] I. Ivan, M. Doinea and D. Palaghita, "Aspects Concerning the Optimization of Authentication Process for Distributed Applications," Theoretical and Applied Economics, vol. 6, no. 6, pp. 39-56, 2008.
[6] K. Sundaresan, M. Arslan, S. Singh and S. Rangarajan, "FluidNet: a flexible cloud-based radio access network for small cells," in 19th annual international conference on Mobile computing & networking, Miami, Florida, USA, 2013.
[7] A. Vulpe, O. Fratu, A. Mihovska and R. Prasad, "A Multi-Carrier Scheduling Algorithm for LTE-Advanced," in Wireless Personal Multimedia Communications, 15th International Symposium on, Atlantic City, USA, 2013.
[8] Object Management Group (OMG), "Cloud Standards Wiki," [Online]. Available: http://cloudstandards.org/wiki/. [Accessed November 2013].
[9] L. Smith and I. Lipner, "Free Pool of IPv4 Address Space Depleted," Number Resource Organization, Montevideo, 2011.
[10] C. G. Cernat, V. A. Poenaru and G. Suciu, "Multimedia content distribution between core network routers using Peer-to-Peer (P2P)," in 19th Telecommunications Forum, TELFOR, Belgrade, 2011.
[11] I. Stoica, R. Morris, D. Liben-Nowell, D. R. Karger and M. F. Kaashoek, "Chord: a scalable peer-to-peer lookup protocol for internet applications," IEEE/ACM Transactions on Networking, vol. 11, no. 1, pp. 17-32, 2003.
[12] K. Walsh and E. G. Sirer, "Experience with an object reputation system for peer-to-peer filesharing," in Proc. 3rd Symp. on Networked Systems Design and Implementation, 2006.
[13] NIST, "National Institute of Standards and Technology," [Online]. Available: www.nist.gov. [Accessed August 2013].
[14] P. Mell and T. Grance, "The NIST Definition of Cloud Computing - Recommendations of the National Institute of Standards and Technology," NIST, Gaithersburg, 2011.
[15] D. Kusic, J. O. Kephart, J. E. Hanson, N. Kandasamy and G. Jiang, "Power and performance management of virtualized computing environments via lookahead control," Cluster computing, vol. 12, no. 1, pp. 1-15, 2009.
[16] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg and I. Brandic, "Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility," Future Generation computer systems, vol. 25, no. 6, pp. 599-616, 2009.
[17] E. G. Ularu, F. C. Puican, G. Suciu, A. Vulpe and G. Todoran, "Mobile Computing and Cloud maturity - Introducing Machine Learning for ERP Configuration Automation," Informatica Economica, vol. 17, no. 1, pp. 40-52, 2013.
[18] M. Stokely, J. Winget, E. Keyes, C. Grimes and B. Yolken, "Using a market economy to provision compute resources across planet-wide clusters," in IEEE International Symposium on Parallel & Distributed Processing, IPDPS, Rome, 2009.
[19] A. Martian, R. Craciunescu, A. Vulpe, O. Fratu and I. Marghescu, "Perspectives on Dynamic Spectrum Access Procedures," in Wireless Personal Multimedia Communications, 15th International Symposium on, Atlantic City, USA, 2013.
[20] M. N. K. Boulos, B. Resch, D. N. Crowley and J. G. Breslin, "Crowdsourcing, citizen sensing and sensor web technologies for public and environmental health surveillance and crisis management: trends, OGC standards and application examples," Journal of health geographics, vol. 10, no. 67, pp. 1-29, 2011.
[21] B. Chihani, E. Bertin and N. Crespi, "Enhancing M2M communication with cloud-based context management," in IEEE 6th International Conference on Next Generation Mobile Applications, Services and Technologies (NGMAST).
[22] B. Chihani, E. Bertin, I. S. Suprapto, J. Zimmermann and N. Crespi, "Enhancing Existing Communication Services with Context Awareness," Journal of Computer Networks and Communications, 2012.
[23] J. Sills and J. Moore, "Science Propels a New Era of Retail Price Optimization," Revionics, 2013.
[24] D. Hinchcliffe, "Eight ways that Cloud computing will change business," 2009. [Online]. Available: http://www.zdnet.com/blog/hinchcliffe/eight-ways-that-Cloud-computing-will-change-business/488. [Accessed November 2013].
George SUCIU1, Simona HALUNGA1, Anca APOSTU2, Alexandru VULPE1,
Gyorgy TODORAN1
1 Faculty of Electronics, Telecommunications and Information Technology,
University Politehnica of Bucharest, Romania
2 PhD. Student, Institute of Doctoral Studies Bucharest
[email protected], [email protected], [email protected], [email protected],
George SUCIU graduated from the Faculty of Electronics, Telecommunications and Information Technology at the University "Politehnica" of Bucharest in 2004. He holds a Master diploma in Informatics Project Management (2010) from the Faculty of Cybernetics, Statistics and Economic Informatics of the Academy of Economic Studies Bucharest, and his current PhD work is focused on the field of Electronics Engineering and Cloud Communications. He is also an IEEE member and has received a type D IPMA certification in project management from the Romanian Project Management Association, an IPMA partner organization. He is the author or co-author of over 30 journal articles and conference papers. His scientific fields of interest include: project management, electronics and telecommunications, cloud computing, big data, open source, IT security, data acquisition and signal processing. Known languages: German, French, English. He also has experience in leading and participating in various research projects (FP7, National Structural Funds), with more than 15 years of activity in information and telecommunication systems.
Simona HALUNGA received the M.S. degree in electronics and telecommunications in 1988 and the Ph.D. degree in communications in 1996, both from the University Politehnica of Bucharest, Bucharest, Romania. Between 1996 and 1997 she followed post-graduate courses in Management and Marketing organized by the Romanian Trade and Industry Chamber and the Politehnica University of Bucharest in collaboration with the Technical Hochschule Darmstadt, Germany, and in 2008 post-graduate courses in Project Management at the Regional Centre for Continuous Education for Public Local Administration, Bucharest. She has been Assistant Professor (1991-1996), Lecturer (1997-2001) and Associate Professor (2001-2005), and since 2006 she has been a full professor at the Politehnica University of Bucharest, Faculty of Electronics, Telecommunications and Information Technology, Telecommunications Department. Between 1997 and 1999 she was a Visiting Assistant Professor at the Electrical and Computer Engineering Department, University of Colorado at Colorado Springs, USA. Her domains of interest are Multiple Access Systems & Techniques, Satellite Communications, Digital Signal Processing for Telecommunications, Digital Communications - Radio Data Transmissions, and Analog and Digital Transmission Systems.
Anca APOSTU graduated from the Academy of Economic Studies, Bucharest (Romania), Faculty of Cybernetics, Statistics and Economic Informatics, in 2006. She holds a Master diploma in Economic Informatics (2010) and is currently a Ph.D. candidate in Economic Informatics with the doctoral thesis "Informatics solution in a distributed environment regarding unitary tracking of prices". Her scientific fields of interest include: Economics, Databases, Programming, Information Systems, Information Security, Distributed Systems, Cloud Computing and Big Data.
Alexandru VULPE received his B.Sc, M.Sc from the Faculty of Electronics, Telecommunications and Information Technology at the University "Politehnica" of Bucharest in 2009 and 2011, with both theses focused on interoperability between wireless access networks based on the IEEE 802.21 standard, which included software modeling in the C++ language. His main interests are focused on wireless access technologies, integrated telecommunication networks, optimization techniques, software development, mobile application development and his PhD work is in the area of heterogeneous wireless networks, 4G and beyond 4G networks. He is also a participant in several research projects focused on wireless sensor networks (SaRaT-IWSN, CORONA), with an activity related to wireless communications protocols development, simulations and testing and in test applications development.
Gyorgy TODORAN graduated from the Faculty of Electronics and Telecommunications at the University "Politehnica" of Bucharest in 2000. He holds Master degrees in Quality Management (2001) and Strategic Management (2002) from the "Politehnica" University of Bucharest. He is currently working on his Ph.D. thesis in security technologies, with a focus on open source, cloud computing, mobile and BYOD initiatives. He has more than 10 years of experience in commercial and governmental telecommunication systems, mainly in system administration, system management, design, project management and consulting.