Assignment #1

Posted by rizalinacadayona | 3:44 AM

1. Explain the circumstances under which a token-ring network is more effective than an Ethernet network.
Token ring local area network (LAN) technology is a LAN protocol that resides at the data link layer (DLL) of the OSI model. It uses a special three-byte frame called a token that travels around the ring, and token ring frames travel completely around the loop. Token ring also specifies an optional medium access scheme that allows a station with a high-priority transmission to request priority access to the token; eight priority levels (0-7) are used. When a station wishing to transmit receives a token or data frame with a priority less than or equal to the station's requested priority, it sets the priority bits to its desired priority. The station does not immediately transmit; the token circulates around the medium until it returns to the station. Upon sending and receiving its own data frame, the station downgrades the token priority back to the original priority.
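As a rough illustration of this priority scheme, here is a minimal Python sketch. The Token and station structures below are simplified assumptions made up for the example, not the actual IEEE 802.5 frame format.

# Simplified sketch of token-ring priority reservation (not the real 802.5 frame layout).

class Token:
    def __init__(self):
        self.priority = 0      # access priority currently carried by the token (0-7)
        self.reservation = 0   # highest priority reserved by stations the token has passed

def station_handles_token(name, wanted_priority, token):
    """One station's view of a circulating token when it has a frame queued."""
    if wanted_priority >= token.priority:
        # Priority is high enough: seize the token and transmit the queued frame.
        print(f"{name} seizes the token and transmits at priority {wanted_priority}")
    elif wanted_priority > token.reservation:
        # Not allowed to transmit yet: record a reservation so the token is
        # reissued at this priority once the current sender is finished.
        token.reservation = wanted_priority
        print(f"{name} reserves priority {wanted_priority} for the next token")

token = Token()
token.priority = 5                      # assume a high-priority transfer is in progress
station_handles_token("A", 2, token)    # too low: only reserves
station_handles_token("B", 6, token)    # high enough: seizes the token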
Ethernet, by contrast, was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium. The methods used show some similarities to radio systems, although there are fundamental differences, such as the fact that it is much easier to detect collisions in a cable broadcast system than in a radio broadcast. For signal degradation and timing reasons, coaxial Ethernet segments had a restricted size which depended on the medium used. For example, 10BASE5 coax cables had a maximum length of 500 meters (1,640 ft). Also, as was the case with most other high-speed buses, Ethernet segments had to be terminated with a resistor at each end. For coaxial-cable-based Ethernet, each end of the cable had a 50-ohm resistor attached. Typically this resistor was built into a male BNC or N connector and attached to the last device on the bus, or, if vampire taps were in use, to the end of the cable just past the last device. Because token passing gives every station a deterministic, collision-free turn to transmit, and lets high-priority traffic reserve the token, a token-ring network is more effective than Ethernet when the network is heavily loaded or when predictable access delays are required; Ethernet's contention-based access degrades under heavy traffic, although it is simpler and cheaper on lightly loaded networks.
2. Although security issues were not mentioned in this chapter, every network owner must consider them. Knowing that open networks allow all data to pass to every node, describe the possible security concerns of open network architectures. Include the implications of passing logon procedures, user IDs, and passwords openly on the network.
Securing network infrastructure is like securing the possible entry points of attack on a country by deploying an appropriate defense, whereas computer security is more like providing the means to protect a single PC against outside intrusion. The former is the more practical approach, because it shields everything behind the perimeter before an attack reaches it. Network-level preventive measures secure access to the network itself, thereby protecting the computers and other shared resources, such as printers and network-attached storage, connected by the network; attacks can be stopped at their entry points before they spread. Computer security, by contrast, focuses on securing individual hosts, and a host whose security is compromised is likely to infect other hosts connected to a potentially unsecured network. A host's security is also vulnerable to users who hold elevated access privileges on it. In an open architecture this matters even more: because every frame is delivered to every node, any station can capture logon exchanges, user IDs, and passwords sent in cleartext and then impersonate legitimate users, so credentials should be encrypted or exchanged through a challenge-response scheme rather than passed openly.
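To illustrate that last point, here is a small Python sketch using only the standard hmac, hashlib, and secrets modules. The function names and the toy "shared medium" list are illustrative assumptions, not a real logon protocol; the idea is simply that a challenge-response exchange never puts the password itself on the wire.

# Sketch: why cleartext logons are dangerous on an open network, and one way to avoid them.
import hashlib
import hmac
import secrets

shared_medium = []   # every node on an open network can read what lands here

def insecure_login(user_id, password):
    # Anything appended here is visible to every station on the segment.
    shared_medium.append(f"LOGIN {user_id} {password}")

def challenge_response_login(user_id, password, server_secret_store):
    # Server issues a random challenge; the client answers with an HMAC of it.
    challenge = secrets.token_hex(16)
    shared_medium.append(f"CHALLENGE {challenge}")
    key = hashlib.sha256(password.encode()).digest()
    answer = hmac.new(key, challenge.encode(), hashlib.sha256).hexdigest()
    shared_medium.append(f"RESPONSE {user_id} {answer}")
    # The server recomputes the HMAC from its stored verifier and compares.
    expected = hmac.new(server_secret_store[user_id], challenge.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(answer, expected)

server_secrets = {"alice": hashlib.sha256(b"s3cret").digest()}
insecure_login("alice", "s3cret")                       # password readable by any sniffer
ok = challenge_response_login("alice", "s3cret", server_secrets)
print(shared_medium)          # the challenge-response lines never contain the password
print("login accepted:", ok)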
3. Remembering the discussion of deadlocks, if you were designing a networked system, how would you manage the threat of deadlocks in your network? Consider all of the following: prevention, detection, avoidance, and recovery.

Deadlock refers to a specific condition when two or more processes are each waiting for another to release a resource, or more than two processes are waiting for resources in a circular chain. Deadlock is a common problem in multiprocessing where many processes share a specific type of mutually exclusive resource known as a software, or soft, lock. Computers intended for the time-sharing and/or real-time markets are often equipped with a hardware lock (or hard lock) which guarantees exclusive access to processes, forcing serialization. Deadlocks are particularly troubling because there is no general solution to avoid (soft) deadlocks.
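As a concrete illustration of the circular-wait situation just described, the following Python sketch (the thread and lock names are invented for the example) has two threads grab the same pair of locks in opposite orders; the timeout on the second acquire stands in for a watchdog so the program reports the deadlock instead of hanging forever.

# Two threads acquire the same pair of locks in opposite orders, producing a classic deadlock.
import threading
import time

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker(name, first, second):
    with first:
        time.sleep(0.1)                      # give the other thread time to grab its first lock
        # Each thread now waits for the lock the other one is holding: circular wait.
        if second.acquire(timeout=1.0):      # timeout stands in for a deadlock watchdog
            second.release()
            print(f"{name}: finished normally")
        else:
            print(f"{name}: gave up waiting -- deadlock detected")

t1 = threading.Thread(target=worker, args=("T1", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("T2", lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()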
Avoidance: Deadlock can be avoided if certain information about processes is available in advance of resource allocation. For every resource request, the system checks whether granting the request would put the system into an unsafe state, meaning a state that could result in deadlock, and it grants only those requests that lead to safe states. In order to determine whether the next state will be safe or unsafe, the system must know in advance, at any time, the number and type of all resources in existence, available, and requested. One well-known algorithm used for deadlock avoidance is the Banker's algorithm, which requires each process's maximum resource usage to be known in advance. However, for many systems it is impossible to know in advance what every process will request, which means that deadlock avoidance is often impossible.
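A minimal sketch of the safety check at the heart of the Banker's algorithm is shown below; the variable names and the tiny example matrices are assumptions made up for illustration.

# Banker's algorithm safety check: grant a request only if a safe ordering of processes still exists.

def is_safe(available, allocation, maximum):
    """available: free units of each resource; allocation/maximum: per-process vectors."""
    need = [[m - a for m, a in zip(max_row, alloc_row)]
            for max_row, alloc_row in zip(maximum, allocation)]
    work = list(available)
    finished = [False] * len(allocation)
    while True:
        progressed = False
        for i, done in enumerate(finished):
            # A process can finish if its remaining need fits in what is currently free.
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]  # it returns its resources
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)   # safe only if every process could eventually finish

# Three processes, two resource types.
allocation = [[0, 1], [2, 0], [3, 0]]
maximum    = [[7, 3], [3, 2], [6, 0]]
print(is_safe([3, 3], allocation, maximum))   # True: the processes can finish in the order P1, P2, P0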
Detection: Often neither deadlock avoidance nor deadlock prevention can be used. Instead, deadlock detection and process restart are used, by employing an algorithm that tracks resource allocation and process states and rolls back and restarts one or more of the processes in order to remove the deadlock. Detecting a deadlock that has already occurred is easily possible, since the resources that each process has locked and/or currently requested are known to the resource scheduler or OS. Detecting the possibility of a deadlock before it occurs is much more difficult and is, in fact, generally undecidable, because the halting problem can be rephrased as a deadlock scenario. However, in specific environments, using specific means of locking resources, deadlock detection may be decidable. In the general case, it is not possible to distinguish between algorithms that are merely waiting for a very unlikely set of circumstances to occur and algorithms that will never finish because of deadlock.
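A detector of the kind described here can be sketched as a cycle search over a wait-for graph; the graph and process names below are invented for the example.

# Deadlock detection as cycle detection in a wait-for graph:
# an edge P -> Q means "process P is waiting for a resource held by process Q".

def find_deadlock(wait_for):
    """Return a list of processes forming a cycle, or None if there is no deadlock."""
    def dfs(node, path, visiting):
        for nxt in wait_for.get(node, []):
            if nxt in visiting:                      # back edge: we closed a cycle
                return path[path.index(nxt):] + [nxt]
            cycle = dfs(nxt, path + [nxt], visiting | {nxt})
            if cycle:
                return cycle
        return None
    for start in wait_for:
        cycle = dfs(start, [start], {start})
        if cycle:
            return cycle
    return None

# P1 waits for P2, P2 waits for P3, P3 waits for P1 -> deadlock; P4 is unaffected.
graph = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"], "P4": ["P1"]}
print(find_deadlock(graph))   # ['P1', 'P2', 'P3', 'P1']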
Prevention: Deadlocks can be prevented by ensuring that at least one of the following four necessary conditions cannot hold:
Removing the mutual exclusion condition means that no process may have exclusive access to a resource. This proves impossible for resources that cannot be spooled, and even with spooled resources deadlock could still occur. Algorithms that avoid mutual exclusion are called non-blocking synchronization algorithms.
The "hold and wait" conditions may be removed by requiring processes to request all the resources they will need before starting up (or before embarking upon a particular set of operations); this advance knowledge is frequently difficult to satisfy and, in any case, is an inefficient use of resources. Another way is to require processes to release all their resources before requesting all the resources they will need. This too is often impractical. (Such algorithms, such as
serializing tokens, are known as the all-or-none algorithms.)
A "no
preemption" (lockout) condition may also be difficult or impossible to avoid as a process has to be able to have a resource for a certain amount of time, or the processing outcome may be inconsistent or thrashing may occur. However, inability to enforce preemption may interfere with a priority algorithm. (Note: Preemption of a "locked out" resource generally implies a rollback, and is to be avoided, since it is very costly in overhead.) Algorithms that allow preemption include lock-free and wait-free algorithms and optimistic concurrency control.
The circular wait condition: algorithms that avoid circular waits include disabling interrupts during critical sections, using a hierarchy to determine a partial ordering of resources (where no obvious hierarchy exists, even the memory address of resources has been used to determine ordering), and Dijkstra's solution. A minimal sketch of the resource-ordering approach follows below.
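In the sketch below (the lock names and ranks are assumptions for the example), every caller acquires locks strictly in ascending rank, which makes a circular wait on those locks impossible.

# Circular-wait prevention by imposing a total order on locks:
# every thread must acquire locks in ascending rank, so no cycle of waits can form.
import threading

class OrderedLock:
    def __init__(self, name, rank):
        self.name, self.rank = name, rank
        self.lock = threading.Lock()

def acquire_in_order(*locks):
    """Acquire the given locks in rank order, regardless of the order they were passed in."""
    ordered = sorted(locks, key=lambda l: l.rank)
    for lock in ordered:
        lock.lock.acquire()
    return ordered

def release_all(locks):
    for lock in reversed(locks):
        lock.lock.release()

disk = OrderedLock("disk", rank=1)
printer = OrderedLock("printer", rank=2)

# Both callers end up locking disk before printer, so they can never deadlock on this pair.
held = acquire_in_order(printer, disk)
print("acquired:", [l.name for l in held])
release_all(held)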
Recovery: Once deadlock has been detected within a distributed system, there must be a way to recover from it.
Possible methods for recovery:
· Operator intervention. At one time, this was a feasible alternative for uniprocessor systems; however, it has little value for today's distributed systems.
· Termination of process(es). Some victim process (or set of processes) is chosen for termination from the cycle or knot of deadlocked processes. This process is terminated, requiring a later restart. All the resources allocated to this process are released so that they may be reassigned to other deadlocked processes. With an appropriately chosen victim process, this should resolve the deadlock.
· Rolling back process(es). In order to roll back a victim process, there needs to have been some previous checkpoint at which the state of the victim process was saved to stable storage; this requires extra overhead. There must also be an assurance that the rolled-back process is not holding the resources needed by the other deadlocked processes at that point. With an appropriately chosen victim process, the needed resources will be released and assigned to the other deadlocked processes. This should resolve the deadlock. (A small sketch of victim selection and resource release follows this list.)
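To tie the termination and rollback options together, here is a small Python sketch that picks the cheapest victim from a detected cycle and returns its resources to the pool. The cost function and data structures are assumptions made up for illustration, not part of any particular system.

# Recovery sketch: pick a victim from a detected deadlock cycle, release its resources,
# and mark it for restart from its last checkpoint.

def recover(cycle, allocation, available, rollback_cost):
    """cycle: deadlocked process names; allocation: resources held per process."""
    # Choose the victim with the smallest rollback cost (e.g. least work lost).
    victim = min(cycle, key=lambda p: rollback_cost[p])
    # Release everything the victim holds so the other deadlocked processes can proceed.
    for resource, count in allocation.pop(victim, {}).items():
        available[resource] = available.get(resource, 0) + count
    return victim, available

allocation = {"P1": {"tape": 1}, "P2": {"disk": 2}, "P3": {"printer": 1}}
available = {"tape": 0, "disk": 0, "printer": 0}
victim, available = recover(["P1", "P2", "P3"], allocation, available,
                            rollback_cost={"P1": 10, "P2": 3, "P3": 7})
print("victim:", victim)             # P2 is cheapest to roll back
print("freed resources:", available)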

4. Assuming you had sufficient funds to upgrade only one component for a system with which you are familiar, explain which component you would choose to upgrade to improve overall performance, and why?
The central processing unit (CPU) is the component I would upgrade. A CPU is a class of logic machine that can execute computer programs, a broad definition that can easily be applied to many early computers that existed long before the term "CPU" ever came into widespread usage. Because the processor executes every instruction the system runs, a faster CPU raises the performance of nearly every workload, which makes it the single component most likely to improve overall performance. One popular upgrade CPU for the 486, for example, was the AMD DX5-133 (also known as the PR75SSA, among other names), which really was just an enhanced high-speed 486. These were often built in a QFP package and soldered to a socket adapter with the needed multiplier and voltage-regulator circuits. Other upgrade chips used the very powerful and capable Cyrix 5x86 CPU, which was in reality a stripped-down 6x86 (a Pentium-class CPU). Since these CPUs ran at higher than normal voltages and used extensive power regulators, they almost always had fans; some ran off the adapter itself, while others used various plugs to tap into drive connectors.
