Friday, November 15, 2019

Analysis of Docker Technology

What is the technology?

Docker is a software containerization platform.

What does it do and how does it improve upon previous technologies?

Docker allows users to run multiple different software packages, including multiple instances of the same package, within isolated virtual containers. The behaviour and features are similar to running a virtualized operating system: containers are isolated from the host machine's operating system, multiple instances of the same software package can run simultaneously, and applications are stored in a format that can be transferred between physical machines. Traditional virtualization hypervisors such as Hyper-V, ESXi and Xen all require each virtualized instance to have its own complete operating system, drivers, libraries and software packages installed and running. Docker moves away from this method and instead provides an abstraction layer between the host operating system's kernel and the containerized application. Containerized applications are configured to share the same operating system and libraries, which removes the overhead of running multiple instances of these components and reduces system resource utilization. In addition to the performance benefits, Docker maintains the security features provided by virtualization hypervisors: containers are configured to use virtualized network interfaces, allowing segregation, VLAN tagging and inter-container networking, among other features. Docker container images are self-contained, allowing them to be transferred freely between different physical machines without reconfiguration. This has also led to the creation of multiple public repositories of Docker images, where pre-configured open-source software packages can be uploaded and shared.

How might it transform computers and devices using it? Tell us some scenarios.
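As a concrete illustration of the self-contained image format described above, here is a minimal Dockerfile sketch. The base image, package and port are assumptions chosen for the example, not taken from any particular project:

```dockerfile
# Illustrative only: base image, package and port are assumptions.
FROM debian:stable-slim
# Layers below the application are shared between images built on the
# same base, so containers reuse one copy of the OS userland/libraries.
RUN apt-get update && apt-get install -y --no-install-recommends nginx
COPY site/ /var/www/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Built with `docker build -t example-site .` and started with `docker run -d -p 8080:80 example-site`, the resulting image can be moved between hosts (for example via `docker save` and `docker load`) without reconfiguration.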
By converting from a traditional virtualized operating system based configuration, end users can increase utilization by running more software on less physical hardware. This in turn reduces hardware, energy and other related costs, and improves efficiency when deploying new software instances. Web hosting services could increase the number of services their existing hardware provides and deploy new services more efficiently. For example, each WordPress installation could be configured in an individual container while accessing a shared SQL database, rather than each installation requiring its own full virtualized operating system. Software developers can also take advantage of Docker in their development and deployment cycles: software can be configured and optimized on developers' local machines or development servers, then easily transferred to quality assurance teams and finally to production environments.

Why is it interesting and important?

Docker is an important step forward from traditional virtualization technology. The software has been developed under the open-source Apache License 2.0, allowing anyone to take part in development and to freely use and modify any components for their own projects, both personal and commercial, provided they follow the licensing requirements for their scenario. Consolidating existing infrastructure reduces energy consumption, lowering users' carbon footprint. Other consumables used in certain operations can also be reduced, such as water in server-farm cooling configurations, and the physical space saved allows more compact configurations. Management and maintenance of software packages can also be improved: if issues are found with a particular software update, the single application can be rolled back rather than the entire operating system, provided the previous container image is kept.

What is the device?

Ring: an IoT connected doorbell

What does it do?
How would you use it? Tell us some scenarios.

Ring is a doorbell with an integrated camera, microphone, speaker and motion sensor, connected to the internet via WiFi. Because the doorbell is internet connected, it can alert the user via a smartphone app when the doorbell is rung or the motion sensor is triggered. The user can then check the video feed from the door to determine who is there. In response, the user can choose to activate the speaker function and talk to the person at the door through the smart device, much like a traditional intercom system. The device also saves its video recordings to a cloud service, allowing the footage to be viewed anywhere using a compatible smart device.

The device can be used in a number of ways. If the user is expecting a parcel and is not at the address at the time of delivery, they will be alerted on their smart device when the doorbell is rung. They can then activate the video feed to confirm who is at the door and use the speaker to ask the courier to leave the parcel in a safe location. Home security can also be improved by the device: the video recording functionality is triggered by any motion near the front door, even if the doorbell is not rung, and the footage is stored off-site via a cloud storage service. In the unfortunate event of a break-in, the intruder is unable to destroy the footage, which can then assist authorities in subsequent investigations. In addition, some insurance providers may offer reduced premiums when such devices are installed.

Briefly outline the device's interesting software/hardware/networking. In what way does computer technology transform or realise the device?

Ring is provided with a mobile application that allows the doorbell to be paired with the user's iOS or Android based mobile device.
The doorbell has an integrated WiFi adapter which connects to the user's home WiFi network to provide internet access to the device. This allows the doorbell to send notifications to the smart device application whether the user is at home on the same network or located elsewhere, provided they have an internet connection. The doorbell's integrated motion sensor and camera add further functionality previously not possible. The camera has been selected for its low-light performance and is combined with infra-red LEDs that illuminate the recorded footage without any light being visible to the human eye, enhancing its ability to act as an inconspicuous security device. Recorded footage is saved off-site using a cloud storage service, which the mobile application uses to let the user watch footage while away from their local network and to provide an archive solution without requiring a large amount of local storage.

Why is the device an interesting or important example of embodiment?

As Professor Tony Hey describes in his book The Computing Universe: A Journey through a Revolution, Butler Lampson's third age of computing is about using computers for embodiment, that is, using computers to interact with people in new and intelligent ways. This is shown through the Ring doorbell: it provides a new way for users to connect with the outside world. It removes the need for a person to be home to accept parcels, and its motion tracking can give the user a greater sense of security at home, all through the adoption of technology.

Week 2

What are the devices?

Device 1: Smartwatch
Device 2: PC

Characterise the computing requirements of the two devices.
Device 1: Smartwatch

CPU: A smartwatch requires a CPU (Central Processing Unit) to process all machine instructions provided by applications and the operating system. Most smartwatches use an ARM architecture CPU.

Bluetooth: Bluetooth is a networking protocol used for the smartwatch to communicate with the host device (usually a smartphone).

NFC: NFC (Near Field Communication) is a networking protocol used for communicating with external devices. This is commonly used in contactless payment systems.

GPS: GPS (Global Positioning System) is a geolocation system used to provide location data to the device. This is commonly used for maps and navigation systems.

Battery: A custom made lithium-ion battery, used to provide power to all the components in the device. To recharge the battery, either a port is provided to connect the watch to a power source, or wireless charging is implemented.

Display: A display is used to provide a visual interface for presenting information to the user.

Touch interface: A touch interface (also known as a digitizer) allows the user to interact with the smartwatch by touching the display. Touch screens are commonly used due to the limited space on a smartwatch for other methods of interfacing with the device, such as buttons.

RAM: RAM (Random Access Memory) is required for the CPU to store data while it is processing instructions. RAM is volatile memory and is not used for persistent data storage.

Persistent storage: Persistent storage is required to store the operating system, applications and user data. This is commonly a form of NAND flash memory, as it offers compact storage with no moving parts which could be damaged in a device that is moved during operation.

Speaker: Speakers are used to provide aural feedback to the user.
Microphone: A microphone is used to receive aural data from the user; for example, a phone call will require the microphone to capture the user's voice.

Sensors: There are numerous sensors on a smartwatch that each monitor a different function. Most smartwatches have an accelerometer to monitor acceleration, a barometer to measure atmospheric pressure, a gyroscope to measure the angle of the device, a heart rate monitor to measure pulse and an ambient light sensor to adjust the backlight of the screen.

GPU: The GPU (Graphics Processing Unit) is used to accelerate the creation of visual elements. This is commonly integrated as part of the CPU in smartwatches due to size constraints.

WiFi: WiFi is a networking protocol used to transmit data in a local network. This is used in a smartwatch to provide network connectivity when the host device (e.g. smartphone) is not available.

Device 2: PC

CPU: A PC requires a CPU (Central Processing Unit) to process all machine instructions provided by applications and the operating system. Most PCs use an x86 architecture CPU.

RAM: RAM (Random Access Memory) is required for the CPU to store data while it is processing instructions. RAM is volatile memory and is not used for persistent data storage.

Persistent storage: Persistent storage is required to store the operating system, applications and user data. This can be a mechanical hard disk drive, utilizing magnetic platters to store data, or a solid state drive which uses NAND flash memory.

Network adapter: A network adapter is required to connect the PC to a local network. This can be achieved through a range of interfaces, including a wired ethernet connection or a wireless WiFi connection. Some systems have both options available.

GPU: The GPU (Graphics Processing Unit) is used to accelerate the creation of visual elements. This can either be integrated into the CPU or provided through a discrete graphics adapter for enhanced performance.
USB ports

Power supply: A power supply is required to convert mains AC power into the DC power required by the individual PC components. Some PCs (such as laptop computers) may utilize a battery to provide an additional power source.

Video ports

Audio ports

C. Comparison

CPU
- Device 1 (smartwatch): physically smaller, slower; must run cooler, no active cooling; ARM based
- Device 2 (PC): physically bigger, more powerful; can run hotter, active cooling; x86 based

Storage
- Device 1: solid state storage only; physical constraints; less storage
- Device 2: space for multiple drives; mix of mechanical and solid state drives; RAID capabilities

Network adapters
- Device 1: must be wireless (WiFi, NFC)
- Device 2: can use wireless or wired connections

2. Moore's Law

Why might Moore's Law come to an end soon? Explain based on current technologies.

Moore's Law was originally conceived in 1965 when Intel co-founder Gordon Moore published an article about microprocessors. In the article, Moore observed that the number of transistors in integrated circuits doubles roughly every 12 months. Ten years later, once more data had become available, Moore revised the period from 12 months to 24 months. Intel's latest processors are built using a 14 nanometer manufacturing process; however, production of Intel's next generation of processors with 10 nanometer transistors has already been pushed back by a year. Intel have stated that this was not a one-off occurrence and that they are no longer able to keep up with their historical rate of improvement, which suggests Moore's Law is coming to an end. One main reason Moore's Law is slowing down, and potentially ending, is that it is not possible to keep shrinking transistors at the required rate while maintaining a functional device. Because MOSFET transistors follow the principles of quantum mechanics, as a transistor shrinks it becomes harder to determine whether it is in the 0 or 1 state.
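Setting the physics aside for a moment, the doubling trend itself is easy to quantify. A rough Python sketch (the starting transistor count is an illustrative figure, roughly that of the Intel 4004 from 1971):

```python
# Back-of-the-envelope illustration of Moore's Law: transistor count
# doubles once every 24 months (Moore's revised figure).
def transistors(initial, years, doubling_period_months=24):
    """Projected transistor count after `years`, one doubling per period."""
    doublings = (years * 12) // doubling_period_months
    return initial * 2 ** doublings

# Starting from roughly 2,300 transistors (Intel 4004, 1971):
for years in (2, 10, 20):
    print(f"after {years:2d} years: {transistors(2300, years):,}")
# after 20 years: 2,355,200
```

The exponential term is why even a modest slip in the doubling period compounds into a large shortfall over a decade or two.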
The electrons inside the transistor can travel through the device with little resistance, so as transistors get smaller the resistance also gets lower, eventually leading to the quantum mechanical phenomenon described as tunnelling, which renders MOSFET based transistors non-functional.

https://www.technologyreview.com/s/601102/intel-puts-the-brakes-on-moores-law/
http://spectrum.ieee.org/semiconductors/devices/the-tunneling-transistor

Discuss a new or future technology which might overcome these technological limitations.

Instead of trying to find ways to prevent quantum tunnelling in transistors, researchers are investigating a new transistor design called the TFET, or Tunneling Field Effect Transistor. This style of transistor is designed to manipulate when quantum tunnelling occurs, in a controlled manner. This allows transistors to be produced at an even smaller scale than MOSFETs can be, without quantum tunnelling becoming a negative side effect. Another advantage of this technology is that it has the potential to be implemented in place of MOSFETs without the technology that uses them needing to be completely redesigned, due to the similarities between TFET and MOSFET transistors.

http://berc.berkeley.edu/tunneling-field-effect-transistors-beyond-moores-law/
https://engineering.nd.edu/news-publications/pressreleases/more-energy-efficient-transistors-through-quantum-tunneling

What might be the ramifications if Moore's Law comes to an end or slows down?

If Moore's Law comes to an end or slows down, the rate at which processor performance improves will decrease. This would reduce the rate at which new technologies are developed and would slow innovation in fields relying on technology.

3. Non von Neumann Processors

Investigate a non von Neumann processor such as a graphics processor, FPGA or signal processor. How and why is it different from a general purpose CPU such as you might find in a phone or PC?
An FPGA, or Field-Programmable Gate Array, is a type of integrated circuit that can be digitally re-programmed after it has been manufactured, unlike, for example, the logic within a microcontroller, which is hardwired during manufacturing. It allows the user to program custom digital circuits using a hardware description language to suit their requirements. FPGAs are sold without any pre-programmed instructions and are instead sold based on their physical features, such as how many logic gates or how much memory they have, making them very flexible devices. As FPGAs can be reprogrammed without any change to the physical hardware, they are used heavily in development and prototyping environments: developers can create and update the logic throughout the development process without the need to purchase new hardware each time a change is made. This differs from hardware such as an x86 CPU, which cannot be reprogrammed and only supports its provided instruction sets.

http://download.springer.com.ezp01.library.qut.edu.au/static/pdf/116/bok%253A978-1-4302-6248-0.pdf?originUrl=http%3A%2F%2Flink.springer.com%2Fbook%2F10.1007%2F978-1-4302-6248-0token2=exp=1490752308~acl=%2Fstatic%2Fpdf%2F116%2Fbok%25253A978-1-4302-6248-0.pdf%3ForiginUrl%3Dhttp%253A%252F%252Flink.springer.com%252Fbook%252F10.1007%252F978-1-4302-6248-0*~hmac=b61cb59b461de816fe408c9ab730e0d9cd6ab12d55885171f66b3c880d9aafaa

Week 3

OS 1: IBM's z/OS

IBM z/OS is an operating system designed to run solely on IBM mainframe computers. Mainframe computers are large, high-end computers designed specifically for processing large amounts of data, often used by large organizations. IBM describe the key features of the z/OS operating system as its stability, security and high availability (IBM, 2014, para. 1).

OS 2: Embedded Linux

Embedded Linux is a term used to cover the Linux operating system being used on embedded computer systems.
These embedded systems are generally very low-end computers, designed to run a very specific and minimal set of software, that are then embedded inside another product; for example, they can be found in some models of washing machine to offer further functionality. Linux is described as being flexible and open (Siever et al., 2003, p. 1-3), which offers developers the ability to customize it to their exact needs and requirements.

Comparison

Both of these operating systems are designed to run very specific types of workloads. The z/OS mainframe operating system is designed to process and analyse large data sets to provide in-depth insight into the data (IBM, 2015); it is designed to handle very high performance workloads and to run as quickly and efficiently as possible. Embedded Linux operating systems are designed to run a very specific workload, such as a smart TV's interface, with as little overhead as possible, due to the hardware restrictions of the low-power systems used in most Embedded Linux implementations (Simmonds, 2015, p. 1-12). Both systems are designed to run specific processes; however, z/OS runs processes on high-end hardware at a large scale, whereas Embedded Linux is most commonly used on low performance hardware at a small scale.

Open Source Software

Security/Flexibility

Open source software gives users the option to modify and adapt software to their needs. The entire source code is publicly available, and the software can be adapted, used within another software package or re-released as a different product, depending on the license type the original developer released the software under (Open Source Initiative, 2016). This also provides security to users, as they can audit the code themselves for security issues and, if required, patch the source code directly, rather than relying on a third party to find and resolve any potential issues.
Cost

Licenses for closed source commercial operating systems can range from a few hundred dollars up to thousands of dollars per installation (Microsoft, 2016). This can become very expensive for businesses that rely on a large number of physical and virtualized operating systems. Open source software has no licensing costs associated with it, which can significantly reduce costs, depending on the use case. This is also applicable to embedded platforms, which are generally designed to have a low cost per unit: open source software can remove software and operating system licensing costs, helping to maintain a low cost per unit.

Operating System: Arch Linux, "a lightweight and flexible Linux® distribution" (Arch Linux, 2017)

How are new features developed?

New features are developed in two main ways. The first is by the individual package developers; for example, new features in the Netcat package are developed by the Netcat developer community. Arch Linux package maintainers are then responsible for packaging new releases for the Arch Linux operating system and adding them to the Arch Linux package repository. The second way features are developed is by the Arch Linux developer team (Arch Linux, 2017). The features they develop range from software written specifically for the operating system, to configuration and modification of third party packages, to managing which packages are included in the base operating system installation and how they are used.

How do new features make their way into a release?

Arch Linux doesn't follow a traditional fixed release cycle; rather, it employs a rolling release model (Arch Wiki, 2017) which allows individual components to be updated as soon as they are deemed ready. Packages are updated as soon as the maintainer has deemed the package stable and ready for release, after which it is uploaded and added to the repository.
This model aims to remove the stagnation between fixed releases and instead keep all packages at their latest releases.

References:

Arch Linux. (2017). A simple, lightweight distribution. Retrieved March 23, 2017, from https://www.archlinux.org/
Arch Linux. (2017). Arch Linux Developers. Retrieved March 23, 2017, from https://www.archlinux.org/people/developers/
Arch Wiki. (2017). Frequently asked questions. Retrieved March 23, 2017, from https://wiki.archlinux.org/index.php/Frequently_asked_questions
IBM. (2014). Mainframe operating system: z/OS. Retrieved March 23, 2017, from https://www.ibm.com/support/knowledgecenter/zosbasics/com.ibm.zos.zmainframe/zconc_opsyszosintro.htm
IBM. (2015). IBM z/OS: Fueling the digital enterprise. Retrieved March 23, 2017, from https://www-01.ibm.com/common/ssi/cgi-bin/ssialias?subtype=cainfotype=ansupplier=897letternum=ENUS215-267
Microsoft. (2016). Pricing and licensing for Windows Server 2016. Retrieved March 23, 2017, from https://www.microsoft.com/en-au/cloud-platform/windows-server-pricing
Open Source Initiative. (2016). Licenses and Standards. Retrieved March 23, 2017, from https://opensource.org/licenses
Siever, E., Weber, A., & Figgins, S. (2003). Linux in a nutshell (4th ed.). Sebastopol, CA: O'Reilly.
Simmonds, C. (2015). Mastering embedded Linux programming (1st ed.). GB: Packt Publishing.

Week 4

Network 1: WiFi

WiFi (also known as Wireless LAN or WLAN) is a network technology designed as a replacement for LAN cabling and is developed around the IEEE 802.11 specification. The IEEE 802.11 specification is the standard dictating the use of the radio frequencies that WiFi uses to transmit data wirelessly (Tjensvold, 2007). Within the 802.11 specification a range of protocols have been developed, with the current standard being the 802.11ac revision. This specification supports speeds over 1 Gb/s, depending on the antenna configuration in use.
The range of a WiFi signal is generally quite short, at approximately 20-25 metres depending on obstructions. This makes it well suited to home and business environments, where access points can be installed wherever WiFi signal is required, but a poor choice for larger area networks, such as mobile phone data. WiFi power usage is split between the access point and the client receiving the data: the access point uses significantly more power to broadcast the signal than the client device needs to receive it (Zeng, 2014). Modern WiFi specifications, such as the 802.11ac revision, offer low latency communication between clients and access points. The exact latency the client will see depends on the band being used (either 2.4 GHz or 5 GHz in the case of 802.11ac), obstructions and the number of antennas in use on the access point.

The security of a WiFi network depends on how it is configured. A basic home configuration using outdated security technologies such as WEP or WPA1 to authenticate users is at risk of unauthorized users gaining access to the network. WPA2 authentication offers a stronger level of security by implementing the AES-CCMP algorithm. WiFi networks can also be vulnerable to MITM (man in the middle) attacks, where an attacker spoofs the WiFi network; clients may unsuspectingly connect to it, allowing the attacker to see the connected clients' traffic. The effectiveness of this type of attack can be counteracted by ensuring traffic is transmitted over secure protocols such as HTTPS and SSH, which render the intercepted data unreadable (Joshi, 2009).

Network 2: Bluetooth 4 and Bluetooth Low Energy (BLE)

Bluetooth 4 is a short range network technology developed by the Bluetooth Special Interest Group. Bluetooth 4 covers a range of specifications including Bluetooth Low Energy, Bluetooth High Speed and Classic Bluetooth.
Bluetooth is used for short range personal area networks (PANs) and ad-hoc networks, primarily in portable devices such as smartphones. Bluetooth devices are classified into three classes, depending on the transmission power of the device and the intended usable range. Class 1 devices have 100 mW transmission power and are designed to be used at ranges of up to 100 metres; class 2 devices have 2.5 mW transmission power and are designed for use at up to 10 metres; and class 3 devices have 1 mW of transmit power and are only usable at ranges of under 10 metres. Class 1 and class 2 are the most commonly used types, with class 1 devices generally found in desktops, laptops and other devices with a large battery or a mains connected power supply. Class 2 devices are generally used in portable devices such as smartphones, IoT connected devices and wireless headsets; class 2 still allows a usable range while keeping power usage to a minimum (Wright, 2007).

The Bluetooth specification has four different security modes in which devices can operate. The mode a device operates in is selected based on the Bluetooth standard in use on both devices. Bluetooth 2.1 and later devices are required to use security mode 4, provided both devices support it. Security mode 4 forces encryption for all services, providing security for all communications except service discovery (Padgette, 2012).

Compare and contrast fibre optic and wireless technologies within the context of a National Broadband Network (NBN) for Australia.

Fibre Optic (FTTP)

The National Broadband Network (NBN) provides a range of connection types, with fibre optic technology being utilised in multiple service types including fibre to the premises (FTTP), fibre to the node (FTTN) and fibre to the distribution point (FTTdp) (NBN, 2017). Fibre optic connections use an optical fibre cable that uses light to transmit data.
This type of cable transmits data faster, further and with lower latency than traditional copper cable, which transmits data by electrical impulses. As this technology relies on a physical connection to the premises, it is not practical for remote locations; however, for areas with higher population densities, supplying broadband via FTTP is more practical, as the cost per premises decreases and the load on wireless services is reduced. Fibre optic cable is not affected by signal degradation as significantly as copper cabling and is therefore able to transmit data across long distances more effectively. As the cable transmits data by light pulses, it is resistant to noise and ground vibrations interrupting or degrading the signal. Fibre optic cable is also able to supply much higher bandwidth connections (Malaney, 2010), with NBN already offering 1 Gbps products to service providers, although this product is not currently being on-sold to consumers due to factors including demand and pricing.
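To put the bandwidth comparison above into perspective, here is a small Python sketch of transfer times at a few access speeds. The speed tiers are illustrative figures chosen for the example, not NBN's official product list:

```python
# Transfer-time comparison at several illustrative access speeds.
# Speeds are in megabits per second; 1 GB is taken as 8,000 megabits
# for simplicity (decimal units, ignoring protocol overhead).
def download_seconds(size_gb, speed_mbps):
    """Seconds to transfer `size_gb` gigabytes at `speed_mbps` Mb/s."""
    return size_gb * 8000 / speed_mbps

for name, mbps in [("25 Mbps", 25), ("100 Mbps", 100), ("1 Gbps", 1000)]:
    print(f"5 GB file at {name}: {download_seconds(5, mbps):,.0f} s")
```

At 1 Gbps the same 5 GB transfer that takes well over twenty minutes on a 25 Mbps service completes in under a minute, which is the practical difference the higher-bandwidth fibre products offer.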
