Edge computing is a distributed computing model that brings data storage and processing closer to where they are needed, reducing response times and making better use of bandwidth. In contrast to traditional models, where applications are hosted on a central server and accessed remotely, edge computing disperses workloads across a network of local nodes. This is typically accomplished through virtualisation and containers. Containers let users isolate specific workloads from one another while sharing a single operating system kernel, avoiding the cost of maintaining multiple full OS environments. Virtualisation runs applications in virtual machines, each with its own guest OS, providing stronger hardware-level isolation at the price of more overhead.

Early precursors of edge computing include content delivery networks (CDNs), which cache content on servers located near end users. Today many cloud providers offer a variety of edge-oriented tools and services; some are free, others are subscription based.

The primary advantage of this technology is reduced latency. To understand why, consider how data travels over a network from client A to a distant server B: every router hop and every kilometre of distance adds delay, and congestion anywhere along the path can delay or drop packets. Edge computing shortens this path by serving requests from a node close to the client, leaving fewer opportunities for congestion and loss to intervene.

Another advantage is that edge computing makes provisioning of resources much easier. Say, for example, two programmers need to collaborate on a large document. By deploying only the amount of resources their workload requires, close to where they work, they can collaborate without waiting on a congested link to a distant data centre.
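To make the latency argument concrete, here is a minimal back-of-the-envelope sketch. The distances and speeds are illustrative assumptions, not measurements from any real deployment; it estimates only propagation delay, ignoring queuing and processing time:

```python
# Rough propagation-delay estimate: delay grows with distance.
# Light in optical fibre travels at roughly 2/3 c, about 200,000 km/s.
FIBRE_SPEED_KM_PER_S = 200_000

def propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds over optical fibre."""
    return distance_km / FIBRE_SPEED_KM_PER_S * 1000

# Hypothetical distances: a central cloud region 3,000 km away
# versus an edge node 30 km away.
central_ms = propagation_delay_ms(3000)  # 15.0 ms one way
edge_ms = propagation_delay_ms(30)       # 0.15 ms one way

print(f"central: {central_ms:.2f} ms, edge: {edge_ms:.2f} ms")
print(f"round-trip saving: {2 * (central_ms - edge_ms):.2f} ms")
```

Even under these simplified assumptions, moving the server from a distant region to a nearby edge node saves tens of milliseconds per round trip, which is significant for interactive applications.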
Edge computing also reduces time-to-market, another plus for organisations looking to improve product quality: instead of investing in new hardware and software and training staff to run new programs on existing servers, companies can simply provision servers and software on demand.

One major disadvantage of edge computing has been security. Controlling access to data is a key concern for some companies, particularly those handling sensitive information. In response, many cloud providers offer additional layers of security such as firewalls. Most providers insist that security is among their highest priorities, and so far there have been relatively few publicised breaches attributable specifically to distributed edge deployments.

Looking ahead, the future likely holds significant changes in IT infrastructure, particularly in networking. Providers may adopt models in which services are not entirely dependent on the public Internet and can instead be delivered as managed offerings, for example as software as a service (SaaS). One emerging approach, metering, tracks user activity in time slots so that load can be accounted for and distributed among users. Network limitations pose no immediate risk, but researchers expect that as cloud adoption increases, so will issues related to connectivity and security. For more background, see https://en.wikipedia.org/wiki/Cloud_computing.
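The metering idea mentioned above can be sketched as simple time-slot accounting. This is a hypothetical illustration: the slot width, user names, and the `Meter` class are assumptions for the sketch, not any real provider's interface:

```python
from collections import defaultdict

SLOT_SECONDS = 60  # assumed slot width: one minute

def slot_for(timestamp: float) -> int:
    """Map a Unix timestamp to its time-slot index."""
    return int(timestamp // SLOT_SECONDS)

class Meter:
    """Accumulate per-user traffic (in bytes) into time slots."""

    def __init__(self) -> None:
        # usage[slot][user] -> bytes sent by that user in that slot
        self.usage = defaultdict(lambda: defaultdict(int))

    def record(self, user: str, timestamp: float, nbytes: int) -> None:
        self.usage[slot_for(timestamp)][user] += nbytes

    def busiest_user(self, slot: int) -> str:
        """The user generating the most traffic in a slot -- a candidate
        to steer toward a less loaded edge node."""
        return max(self.usage[slot], key=self.usage[slot].get)

meter = Meter()
meter.record("alice", 10.0, 500)
meter.record("bob", 20.0, 1500)
meter.record("alice", 70.0, 200)  # falls into the next slot

print(meter.busiest_user(0))  # bob dominates slot 0
```

Per-slot totals like these give an operator the raw data needed both to bill by usage and to decide which users or workloads to redistribute across edge nodes.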