Category Archives: Computer

Computer speakers

Computer speakers are speakers external to a computer. A speaker contains a driver whose diaphragm vibrates to produce sound, usually powered by a built-in amplifier. They come in many different forms: some are attached to or built into the computer, and some are wireless, typically connecting over Bluetooth.

Computer speakers are speakers sold for use with computers, although they are usually capable of other audio uses, e.g. with an MP3 player. Most such speakers have an internal amplifier and consequently require a power source, which may be mains power (often via an AC adapter), batteries, or a USB port (able to supply no more than 2.5 W of DC power: 500 mA at 5 V). The signal input connector is often a 3.5 mm jack plug (usually color-coded lime green per the PC 99 standard); RCA connectors are sometimes used, and a USB port may supply both signal and power (requiring additional circuitry, and only suitable for use with a computer). Battery-powered wireless Bluetooth speakers require no connections at all. Most computers have low-power, low-quality speakers built in; when external speakers are connected, they disable the built-in speakers. Altec Lansing claims to have created the computer speaker market in 1990.

Computer speakers range widely in quality and in price. The speakers sometimes packaged with computer systems are small, plastic, and have mediocre sound quality. Some computer speakers have equalization features such as bass and treble controls. More sophisticated computer speakers can have a subwoofer unit to enhance bass output; the larger subwoofer enclosure usually contains the amplifiers for the subwoofer and the left and right speakers. Some computer displays have rather basic speakers built in. Laptop computers have integrated speakers, usually small and of restricted sound quality, to conserve space.

For better sound, a computer can instead be connected to any external sound system, typically a high-power, high-quality setup. An unusual design by HiWave Technologies, the DyadUSB USB-powered stereo audio amplifier module used in the SoundScience QSB 30W Portable USB Speakers, allows a USB-powered and USB-driven stereo speaker pair to supply 30 W for short periods, provided the signal has brief high-power peaks and a much lower average power, as most music and speech does. The module stores energy from the USB connection during quieter passages and delivers the stored power during the peaks. (With a constant sine-wave input, power output cannot exceed the 2.5 W that any USB speaker can deliver.) The module is claimed to draw less power most of the time, increasing laptop battery endurance, and to deliver clean, unclipped peaks.
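The arithmetic behind this trick can be sketched with assumed numbers (these are illustrative values, not DyadUSB specifications): whatever the music does not use of the 2.5 W USB budget can be banked and spent later on a peak.

# Illustrative energy-budget sketch with assumed numbers (Python).
USB_LIMIT_W = 2.5        # maximum continuous power from a USB 2.0 port
PEAK_W = 30.0            # short-term output power claimed for the speakers
AVG_PROGRAM_W = 1.0      # assumed average power of typical music or speech

surplus_w = USB_LIMIT_W - AVG_PROGRAM_W   # power available to bank during quiet passages
deficit_w = PEAK_W - USB_LIMIT_W          # extra power needed while a peak lasts

banked_j = surplus_w * 10.0               # energy banked over 10 s of quiet material
peak_seconds = banked_j / deficit_w       # how long that energy can sustain a 30 W peak

print(f"Banked energy after 10 s: {banked_j:.1f} J")
print(f"A 30 W peak can be sustained for about {peak_seconds:.2f} s")

With these assumptions, roughly half a second of full 30 W output is available after ten seconds of quieter material, which matches the claim that only short peaks can exceed the USB power limit.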

Computer monitor

A computer monitor is an output device which displays information in pictorial form. A monitor usually comprises the display device, circuitry, casing, and power supply. The display device in modern monitors is typically a thin-film-transistor liquid crystal display (TFT-LCD), with LED backlighting having replaced cold-cathode fluorescent lamp (CCFL) backlighting. Originally, computer monitors were used for data processing while television receivers were used for entertainment. From the 1980s onwards, computers (and their monitors) have been used for both data processing and entertainment, while televisions have implemented some computer functionality.

Early electronic computers were fitted with a panel of light bulbs where the state of each particular bulb would indicate the on/off state of a particular register bit inside the computer. This allowed the engineers operating the computer to monitor the internal state of the machine, so this panel of lights came to be known as the ‘monitor’. As early monitors were only capable of displaying a very limited amount of information, and were very transient, they were rarely considered for program output. Instead, a line printer was the primary output device, while the monitor was limited to keeping track of the program’s operation. As technology developed engineers realized that the output of a CRT display was more flexible than a panel of light bulbs and eventually, by giving control of what was displayed to the program itself, the monitor itself became a powerful output device in its own right.

On two-dimensional display devices such as computer monitors, the display size or viewable image size is the actual amount of screen space that is available to display a picture, video or working space, without obstruction from the case or other aspects of the unit’s design. The main measurements for display devices are: width, height, total area and the diagonal.

The size of a display is usually given by monitor manufacturers as the diagonal, i.e. the distance between two opposite screen corners. This method of measurement is inherited from the first generation of CRT televisions, when picture tubes with circular faces were in common use. Being circular, their size was described by the external diameter of the glass envelope. Since these circular tubes were used to display rectangular images, the diagonal measurement of the rectangular image was smaller than the diameter of the tube’s face (due to the thickness of the glass). This method continued even when cathode ray tubes were manufactured as rounded rectangles; it had the advantage of being a single number specifying the size, and was not confusing when the aspect ratio was universally 4:3.

With the introduction of flat panel technology, the diagonal measurement became the actual diagonal of the visible display. This meant that an eighteen-inch LCD had a larger visible area than an eighteen-inch cathode ray tube. The estimation of monitor size by the distance between opposite corners does not take into account the display aspect ratio, so that, for example, a 16:9 21-inch (53 cm) widescreen display has less area than a 21-inch (53 cm) 4:3 screen. The 4:3 screen has dimensions of 16.8 in × 12.6 in (43 cm × 32 cm) and area 211 sq in (1,360 cm2), while the widescreen is 18.3 in × 10.3 in (46 cm × 26 cm), 188 sq in (1,210 cm2).
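These figures follow directly from the diagonal and the aspect ratio. A short, purely illustrative Python sketch reproduces them:

import math

def screen_dimensions(diagonal_in, aspect_w, aspect_h):
    """Return (width, height, area) in inches / square inches for a given diagonal."""
    # width = d * w / sqrt(w^2 + h^2), height = d * h / sqrt(w^2 + h^2)
    k = diagonal_in / math.hypot(aspect_w, aspect_h)
    width, height = k * aspect_w, k * aspect_h
    return width, height, width * height

for name, (w, h) in {"4:3": (4, 3), "16:9": (16, 9)}.items():
    width, height, area = screen_dimensions(21, w, h)
    print(f"21-inch {name}: {width:.1f} x {height:.1f} in, {area:.1f} sq in")

Running it gives about 16.8 × 12.6 in (211.7 sq in) for the 4:3 screen and 18.3 × 10.3 in (188.3 sq in) for the widescreen, matching the areas quoted above.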

Keyboard

In computing, a computer keyboard is a typewriter-style device which uses an arrangement of buttons or keys to act as mechanical levers or electronic switches. Following the decline of punch cards and paper tape, interaction via teleprinter-style keyboards became the main input method for computers. A keyboard typically has characters engraved or printed on the keys (buttons), and each press of a key typically corresponds to a single written symbol.

Despite the development of alternative input devices, such as the mouse, touchscreen, pen devices, character recognition and voice recognition, the keyboard remains the most commonly used device for direct (human) input of alphanumeric data into computers. In normal usage, the keyboard is used as a text entry interface to type text and numbers into a word processor, text editor or other programs. In a modern computer, the interpretation of key presses is generally left to the software. A computer keyboard distinguishes each physical key from every other and reports all key presses to the controlling software. Keyboards are also used for computer gaming, either with regular keyboards or by using keyboards with special gaming features, which can expedite frequently used keystroke combinations. A keyboard is also used to give commands to the operating system of a computer, such as Windows’ Control-Alt-Delete combination, which brings up a task window or shuts down the machine. A command-line interface is a type of user interface operated entirely through a keyboard, or another device doing the job of one.
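The point that the keyboard merely reports key presses, leaving interpretation to software, can be observed with a small sketch. This example assumes the third-party pynput library is installed (pip install pynput); it is illustrative only and not something the text above depends on.

# Prints every key event until Esc is pressed; the meaning of each key is decided here, in software.
from pynput import keyboard

def on_press(key):
    try:
        print(f"character key pressed: {key.char}")   # printable keys
    except AttributeError:
        print(f"special key pressed: {key}")          # Shift, Ctrl, function keys, ...

def on_release(key):
    if key == keyboard.Key.esc:
        return False  # stop the listener

with keyboard.Listener(on_press=on_press, on_release=on_release) as listener:
    listener.join()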

There are a number of different arrangements of alphabetic, numeric, and punctuation symbols on keys. These different keyboard layouts arise mainly because different people need easy access to different symbols, either because they are inputting text in different languages, or because they need a specialized layout for mathematics, accounting, computer programming, or other purposes. The United States keyboard layout is used as the default in the currently most popular operating systems: Windows, Mac OS X and Linux. The common QWERTY-based layout was designed early in the era of mechanical typewriters, so its ergonomics were compromised to allow for the mechanical limitations of the typewriter.

As the letter-keys were attached to levers that needed to move freely, inventor Christopher Sholes developed the QWERTY layout to reduce the likelihood of jamming. With the advent of computers, lever jams are no longer an issue, but nevertheless, QWERTY layouts were adopted for electronic keyboards because they were widely used. Alternative layouts such as the Dvorak Simplified Keyboard are not in widespread use. The QWERTZ layout is widely used in Germany and much of Central Europe. The main difference between it and QWERTY is that Y and Z are swapped, and most special characters such as brackets are replaced by diacritical characters.

Alphabetical, numeric, and punctuation keys are used in the same fashion as a typewriter keyboard to enter their respective symbol into a word processing program, text editor, data spreadsheet, or other program. Many of these keys will produce different symbols when modifier keys or shift keys are pressed. The alphabetic characters become uppercase when the shift key or Caps Lock key is depressed. The numeric characters become symbols or punctuation marks when the shift key is depressed. The alphabetical, numeric, and punctuation keys can also have other functions when they are pressed at the same time as some modifier keys.

The Space bar is a horizontal bar in the lowermost row, which is significantly wider than other keys. Like the alphanumeric characters, it is also descended from the mechanical typewriter. Its main purpose is to enter the space between words during typing. It is large enough so that a thumb from either hand can use it easily. Depending on the operating system, when the space bar is used with a modifier key such as the control key, it may have functions such as resizing or closing the current window, half-spacing, or backspacing. In computer games and other applications the key has myriad uses in addition to its normal purpose in typing, such as jumping and adding marks to check boxes. In certain programs for playback of digital video, the space bar is used for pausing and resuming the playback.


Mouse

A computer mouse is an input device that is most often used with a personal computer. Moving a mouse along a flat surface moves the on-screen cursor to different items on the screen. Items can be moved or selected by pressing the mouse buttons (called clicking). It is called a mouse because its early designers thought the wire connecting it to the computer looked like the tail of a mouse. Today, many computer mice use wireless technology and have no wire.

In 1964, Douglas Engelbart (1925–2013), a researcher at the Stanford Research Institute, wanted to find a way to make using computers easier. In those days, computers were large and expensive, and using them was very hard because everything had to be typed in by hand and there was no way to alter things if you made a mistake. After studying and designing for a long time, Engelbart succeeded in inventing an input device which he named the ‘X-Y position indicator’. At first it needed two hands to use, but it was changed so that only one hand was needed. This model was more like the mouse that we use today. Xerox’s Palo Alto Research Center introduced a graphical user interface operated with a mouse in 1981. The mouse was used with Apple’s Macintosh when it came out in 1984, and Microsoft Windows also used the mouse when it came out, so over time computer mice came to be used with many computers. Modern mice typically have three controls: a left button, a right button, and a scroll wheel that also acts as a button.

A mouse typically controls the motion of a pointer in two dimensions in a graphical user interface (GUI). The mouse turns movements of the hand backward and forward, left and right into equivalent electronic signals that in turn are used to move the pointer.

The relative movements of the mouse on the surface are applied to the position of the pointer on the screen, which signals the point where actions of the user take place, so hand movements are replicated by the pointer. Clicking or hovering (stopping movement while the cursor is within the bounds of an area) can select files, programs or actions from a list of names, or (in graphical interfaces) through small images called “icons” and other elements. For example, a text file might be represented by a picture of a paper notebook, and clicking while the cursor hovers over this icon might cause a text editing program to open the file in a window.

Different ways of operating the mouse cause specific things to happen in the GUI:

  • Click: pressing and releasing a button.
    • (left) Single-click: clicking the main button.
    • (left) Double-click: clicking the button two times in quick succession counts as a different gesture than two separate single clicks.
    • (left) Triple-click: clicking the button three times in quick succession counts as a different gesture than three separate single clicks. Triple clicks are far less common in traditional navigation.
    • Right-click: clicking the secondary button, or clicking with two fingers. (This usually brings up a menu with different options depending on the software.)
    • Middle-click: clicking the tertiary button.
  • Drag and drop: pressing and holding a button, then moving the mouse without releasing. (The instruction “drag with the right mouse button”, rather than simply “drag”, is used when the object should be dragged while holding the right mouse button down instead of the more commonly used left button.)
  • Mouse button chording (a.k.a. Rocker navigation).
    • Combination of right-click then left-click.
    • Combination of left-click then right-click or keyboard letter.
    • Combination of left or right-click and the mouse wheel.
  • Clicking while holding down a modifier key.
  • Moving the pointer a long distance: When a practical limit of mouse movement is reached, one lifts up the mouse, brings it to the opposite edge of the working area while it is held above the surface, and then sets it back down on the working surface. This is often not necessary, because acceleration software detects fast movement and moves the pointer proportionally faster than for slow mouse motion (a small sketch of such an acceleration curve follows this list).
  • Multi-touch: this method is similar to a multi-touch trackpad on a laptop with support for tap input for multiple fingers, the most famous example being the Apple Magic Mouse.
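A minimal sketch of how pointer acceleration might map relative mouse movement onto cursor movement follows. The curve and constants are assumptions chosen for illustration, not the algorithm of any particular driver.

def accelerate(dx, dy, gain=1.0, threshold=4.0, boost=2.5):
    """Scale a relative mouse report (dx, dy) in counts into pointer pixels.

    Slow movements pass through at the base gain; fast movements (speed above
    `threshold` counts per report) are boosted, so a quick flick crosses the
    screen without a long physical swipe.
    """
    speed = (dx * dx + dy * dy) ** 0.5
    factor = gain if speed <= threshold else gain * boost
    return dx * factor, dy * factor

pointer_x, pointer_y = 500.0, 400.0            # current pointer position in pixels
for dx, dy in [(1, 0), (2, 1), (15, -6)]:      # sample relative reports from the mouse
    px, py = accelerate(dx, dy)
    pointer_x += px
    pointer_y += py
    print(f"moved ({dx:+},{dy:+}) -> pointer at ({pointer_x:.0f}, {pointer_y:.0f})")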

Computer virus

A computer virus is a type of malicious software program (“malware”) that, when executed, replicates itself by modifying other computer programs and inserting its own code. Besides programs, infected hosts can include data files or the “boot” sector of the hard drive. When this replication succeeds, the affected areas are then said to be “infected” with a computer virus.

Virus writers use social engineering deceptions and exploit detailed knowledge of security vulnerabilities to initially infect systems and to spread the virus. The vast majority of viruses target systems running Microsoft Windows, employing a variety of mechanisms to infect new hosts, and often using complex anti-detection/stealth strategies to evade antivirus software. Motives for creating viruses can include seeking profit (e.g., with ransomware), sending a political message, personal amusement, demonstrating that a vulnerability exists in software, sabotage and denial of service, or simply a wish to explore cybersecurity issues, artificial life and evolutionary algorithms.

Computer viruses currently cause billions of dollars’ worth of economic damage each year by causing system failures, wasting computer resources, corrupting data, increasing maintenance costs, and so on. In response, free, open-source antivirus tools have been developed, and an industry of antivirus software has cropped up, selling or freely distributing virus protection to users of various operating systems. As of 2005, no existing antivirus software was able to uncover all computer viruses (especially new ones), and computer security researchers continue to search for new ways to enable antivirus solutions to detect emerging viruses more effectively, before they become widely distributed.

The term “virus” is also commonly, but erroneously, used to refer to other types of malware. “Malware” encompasses computer viruses along with many other forms of malicious software, such as computer “worms”, ransomware, trojan horses, keyloggers, rootkits, spyware, adware, malicious Browser Helper Objects (BHOs) and other malicious software. The majority of active malware threats are actually trojan horse programs or computer worms rather than computer viruses. The term computer virus, coined by Fred Cohen in 1985, is a misnomer.[17] Viruses often perform some type of harmful activity on infected host computers, such as acquisition of hard disk space or central processing unit (CPU) time, accessing private information, corrupting data, displaying political or humorous messages on the user’s screen, spamming their e-mail contacts, logging their keystrokes, or even rendering the computer useless. However, not all viruses carry a destructive “payload” or attempt to hide themselves; the defining characteristic of viruses is that they are self-replicating computer programs which modify other software without user consent.

A memory-resident virus (or simply “resident virus”) installs itself as part of the operating system when executed, after which it remains in RAM from the time the computer is booted up to when it is shut down.

Resident viruses overwrite interrupt handling code or other functions, and when the operating system attempts to access the target file or disk sector, the virus code intercepts the request and redirects the control flow to the replication module, infecting the target. In contrast, a non-memory-resident virus (or “non-resident virus”), when executed, scans the disk for targets, infects them, and then exits.

Many common applications, such as Microsoft Outlook and Microsoft Word, allow macro programs to be embedded in documents or emails, so that the programs may be run automatically when the document is opened. A macro virus (or “document virus”) is a virus that is written in a macro language, and embedded into these documents so that when users open the file, the virus code is executed, and can infect the user’s computer. This is one of the reasons that it is dangerous to open unexpected or suspicious attachments in e-mails. While not opening attachments in e-mails from unknown persons or organizations can help to reduce the likelihood of contracting a virus, in some cases, the virus is designed so that the e-mail appears to be from a reputable organization.

System Correlates Recorded Speech with Images

Speech recognition systems, such as those that convert speech to text on cellphones, are generally the result of machine learning. A computer pores through thousands or even millions of audio files and their transcriptions, and learns which acoustic features correspond to which typed words.

But transcribing recordings is costly, time-consuming work, which has limited speech recognition to a small subset of languages spoken in wealthy nations.

Visual semantics

The version of the system reported in the new paper doesn’t correlate recorded speech with written text; instead, it correlates speech with groups of thematically related images. But that correlation could serve as the basis for others. If, for instance, an utterance is associated with a particular class of images, and the images have text terms associated with them, it should be possible to find a likely transcription of the utterance, all without human intervention. Similarly, a class of images with associated text terms in different languages could provide a way to do automatic translation.

Conversely, text terms associated with similar clusters of images, such as, say, “storm” and “clouds,”  could be inferred to have related meanings. Because the system in some sense learns words’ meanings — the images associated with them — and not just their sounds, it has a wider range of potential applications than a standard speech recognition system. To test their system, the researchers used a database of 1,000 images, each of which had a recording of a free-form verbal description associated with it. They would feed their system one of the recordings and ask it to retrieve the 10 images that best matched it. That set of 10 images would contain the correct one 31 percent of the time.

The researchers trained their system on images from a huge database built by Torralba; Aude Oliva, a principal research scientist at CSAIL; and their students. Through Amazon’s Mechanical Turk crowdsourcing site, they hired people to describe the images verbally, using whatever phrasing came to mind, for about 10 to 20 seconds.

Merging modalities

To build their system, the researchers used neural networks, machine-learning systems that approximately mimic the structure of the brain. Neural networks are composed of processing nodes that, like individual neurons, are capable of only very simple computations but are connected to each other in dense networks. Data is fed to a network’s input nodes, which modify it and feed it to other nodes, which modify it and feed it to still other nodes, and so on. When a neural network is being trained, it constantly modifies the operations executed by its nodes in order to improve its performance on a specified task.

The researchers’ network is, in effect, two separate networks: one that takes images as input and one that takes spectrograms, which represent audio signals as changes of amplitude, over time, in their component frequencies. The output of the top layer of each network is a 1,024-dimensional vector — a sequence of 1,024 numbers. The final node in the network takes the dot product of the two vectors. That is, it multiplies the corresponding terms in the vectors together and adds them all up to produce a single number. During training, the networks had to try to maximize the dot product when the audio signal corresponded to an image and minimize it when it didn’t.
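As a toy illustration of this matching objective, the sketch below stands in for the top-layer outputs of the two networks with random vectors (the dimensions and data are invented; the real networks are far larger and trained, not sampled).

import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the top-layer outputs: one 1,024-dimensional vector per image
# and per spectrogram.
image_vec = rng.normal(size=1024)
audio_vec_matching = image_vec + 0.1 * rng.normal(size=1024)   # audio of the same scene
audio_vec_other = rng.normal(size=1024)                        # unrelated audio clip

def similarity(a, b):
    """The final node of the network: a plain dot product of the two vectors."""
    return float(np.dot(a, b))

print("matching pair:  ", similarity(image_vec, audio_vec_matching))
print("mismatched pair:", similarity(image_vec, audio_vec_other))
# Training pushes the first number up and the second down.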

For every spectrogram that the researchers’ system analyzes, it can identify the points at which the dot-product peaks. In experiments, those peaks reliably picked out words that provided accurate image labels — “baseball,” for instance, in a photo of a baseball pitcher in action, or “grassy” and “field” for an image of a grassy field. In ongoing work, the researchers have refined the system so that it can pick out spectrograms of individual words and identify just those regions of an image that correspond to them.


Computer network

A computer network or data network is a digital telecommunications network which allows nodes to share resources. In computer networks, networked computing devices exchange data with each other using a data link. The connections between nodes are established using either cable media or wireless media.

Network computer devices that originate, route and terminate the data are called network nodes. Nodes can include hosts such as personal computers, phones, servers as well as networking hardware. Two such devices can be said to be networked together when one device is able to exchange information with the other device, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered (i.e. carried as payload) over other more general communications protocols. This formidable collection of information technology requires skilled network management to keep it all running reliably.

Computer networks support an enormous number of applications and services such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers, printers, and fax machines, and use of email and instant messaging applications as well as many others. Computer networks differ in the transmission medium used to carry their signals, communications protocols to organize network traffic, the network’s size, topology and organizational intent. The best-known computer network is the Internet.

Computer networking may be considered a branch of electrical engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines.

A computer network facilitates interpersonal communications allowing users to communicate efficiently and easily via various means: email, instant messaging, online chat, telephone, video telephone calls, and video conferencing. A computer network may be used by security hackers to deploy computer viruses or computer worms on devices connected to the network, or to prevent these devices from accessing the network via a denial-of-service attack.

  • Network Packet

Computer communication links that do not support packets, such as traditional point-to-point telecommunication links, simply transmit data as a bit stream. However, most information in computer networks is carried in packets. A network packet is a formatted unit of data carried by a packet-switched network. In packet networks, the data is formatted into packets that are sent through the network to their destination. Once the packets arrive, they are reassembled into their original message. With packets, the bandwidth of the transmission medium can be better shared among users than if the network were circuit switched. When one user is not sending packets, the link can be filled with packets from other users, and so the cost can be shared, with relatively little interference, provided the link isn’t overused. Packets consist of two kinds of data: control information and user data. The control information provides data the network needs to deliver the user data, for example: source and destination network addresses, error detection codes, and sequencing information. Typically, control information is found in packet headers and trailers, with payload data in between. Often the route a packet needs to take through a network is not immediately available. In that case the packet is queued and waits until a link is free.
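A schematic sketch of that header/payload/trailer split and of reassembly at the destination appears below; the field names and the toy checksum are generic illustrations, not any specific protocol.

from dataclasses import dataclass

@dataclass
class Packet:
    source: str          # source network address (control information)
    destination: str     # destination network address (control information)
    sequence: int        # sequencing information used for reassembly
    payload: bytes       # the user data carried between header and trailer
    checksum: int = 0    # toy error-detection code carried in the trailer

def split_into_packets(message: bytes, src: str, dst: str, size: int = 4):
    """Split a message into fixed-size packets, as a packet-switched network would."""
    return [
        Packet(src, dst, seq, message[i:i + size],
               checksum=sum(message[i:i + size]) % 256)
        for seq, i in enumerate(range(0, len(message), size))
    ]

packets = split_into_packets(b"hello world", "10.0.0.1", "10.0.0.2")
reassembled = b"".join(p.payload for p in sorted(packets, key=lambda p: p.sequence))
print(packets[0])
print(reassembled)  # b'hello world'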

    • Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. The cables consist of copper or aluminum wire surrounded by an insulating layer (typically a flexible material with a high dielectric constant), which itself is surrounded by a conductive layer. The insulation helps minimize interference and distortion. Transmission speed ranges from 200 million bits per second to more than 500 million bits per second.
    • ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed (up to 1 Gigabit/s) local area network
    • Twisted pair wire is the most widely used medium for all telecommunication. Twisted-pair cabling consists of copper wires that are twisted into pairs. Ordinary telephone wires consist of two insulated copper wires twisted into pairs. Computer network cabling (wired Ethernet as defined by IEEE 802.3) consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 million bits per second to 10 billion bits per second. Twisted pair cabling comes in two forms: unshielded twisted pair (UTP) and shielded twisted-pair (STP). Each form comes in several category ratings, designed for use in various scenarios.

      Network links

      The transmission media (often referred to in the literature as the physical media) used to link devices to form a computer network include electrical cable (Ethernet, HomePNA, power line communication, G.hn), optical fiber (fiber-optic communication), and radio waves (wireless networking). In the OSI model, these are defined at layers 1 and 2 — the physical layer and the data link layer.

      Wired technologies

      Fiber optic cables are used to transmit light from one computer/network node to another

      The following wired technologies are listed, roughly, in order from slowest to fastest transmission speed.

    • An optical fiber is a glass fiber. It carries pulses of light that represent data. Some advantages of optical fibers over metal wires are very low transmission loss and immunity from electrical interference. Optical fibers can simultaneously carry multiple wavelengths of light, which greatly increases the rate that data can be sent, and helps enable data rates of up to trillions of bits per second. Optic fibers can be used for long runs of cable carrying very high data rates, and are used for undersea cables to interconnect continents.

    Price is a main factor distinguishing wired- and wireless-technology options in a business. Wireless options command a price premium that can make purchasing wired computers, printers and other devices a financial benefit. Before making the decision to purchase hard-wired technology products, a review of the restrictions and limitations of the selections is necessary. Business and employee needs may override any cost considerations.

    Network interfaces

    A network interface controller (NIC) is computer hardware that provides a computer with the ability to access the transmission media, and has the ability to process low-level network information. For example, the NIC may have a connector for accepting a cable, or an aerial for wireless transmission and reception, and the associated circuitry. In Ethernet networks, each network interface controller has a unique Media Access Control (MAC) address—usually stored in the controller’s permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce.
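A small sketch that splits a MAC address into the IEEE-assigned manufacturer prefix (the three most significant octets) and the device-specific part (the three least significant octets), reflecting the six-octet structure just described. The example address is made up.

def split_mac(mac: str):
    """Return (manufacturer_prefix, device_part) from a colon-separated MAC address."""
    octets = mac.lower().split(":")
    assert len(octets) == 6, "an Ethernet MAC address has six octets"
    oui = ":".join(octets[:3])            # three most significant octets: manufacturer prefix
    nic_specific = ":".join(octets[3:])   # three least significant octets: assigned per device
    return oui, nic_specific

print(split_mac("00:1a:2b:3c:4d:5e"))     # ('00:1a:2b', '3c:4d:5e')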

Computer architecture

In computer engineering, computer architecture is a set of rules and methods that describe the functionality, organization, and implementation of computer systems. Some definitions of architecture define it as describing the capabilities and programming model of a computer but not a particular implementation. In other definitions computer architecture involves instruction set architecture design, logic design, and implementation.

The discipline of computer architecture has three main subcategories:

  1. Instruction Set Architecture, or ISA. The ISA defines the machine code that a processor reads and acts upon, as well as the word size, memory addressing modes, processor registers, and data types.
  2. Microarchitecture, or computer organization, describes how a particular processor will implement the ISA. The size of a computer’s CPU cache, for instance, is an issue that generally has nothing to do with the ISA.
  3. System Design includes all of the other hardware components within a computing system. These include:
    1. Data processing other than the CPU, such as direct memory access (DMA)
    2. Other issues such as virtualization, multiprocessing, and software features

Computer organization helps optimize performance-based products. For example, software engineers need to know the processing power of processors. They may need to optimize software in order to gain the most performance for the lowest price. This can require quite detailed analysis of the computer’s organization. For example, in an SD card, the designers might need to arrange the card so that the most data can be processed in the fastest possible way.

Computer organization also helps plan the selection of a processor for a particular project. Multimedia projects may need very rapid data access, while virtual machines may need fast interrupts. Sometimes certain tasks need additional components as well. For example, a computer capable of running a virtual machine needs virtual memory hardware so that the memory of different virtual computers can be kept separated. Computer organization and features also affect power consumption and processor cost.

The exact form of a computer system depends on the constraints and goals. Computer architectures usually trade off standards, power versus performance, cost, memory capacity, latency (the amount of time it takes for information from one node to travel to the source) and throughput. Sometimes other considerations, such as features, size, weight, reliability, and expandability, are also factors.

The most common scheme does an in-depth power analysis and figures out how to keep power consumption low while maintaining adequate performance.

Performance

Modern computer performance is often described in IPC (instructions per cycle). This measures the efficiency of the architecture at any clock frequency. Many people used to measure a computer’s speed by the clock rate (usually in MHz or GHz). This refers to the cycles per second of the main clock of the CPU. However, this metric is somewhat misleading, as a machine with a higher clock rate may not necessarily have greater performance. As a result, manufacturers have moved away from clock speed as a measure of performance. Other factors influence speed, such as the mix of functional units, bus speeds, available memory, and the type and order of instructions in the programs. There are two main types of speed: latency and throughput. Latency is the time between the start of a process and its completion. Throughput is the amount of work done per unit time. Interrupt latency is the guaranteed maximum response time of the system to an electronic event (like when the disk drive finishes moving some data).
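As a purely numerical illustration of why clock rate alone is misleading, the sketch below compares two hypothetical processors using the relation execution time = instructions / (IPC × clock rate). The workload and the IPC figures are assumptions chosen for the example.

def execution_time(instructions, ipc, clock_hz):
    """Seconds to run a program: instructions / (instructions per cycle * cycles per second)."""
    return instructions / (ipc * clock_hz)

program = 2_000_000_000   # hypothetical workload: 2 billion instructions

fast_clock_low_ipc = execution_time(program, ipc=1.0, clock_hz=4.0e9)
slow_clock_high_ipc = execution_time(program, ipc=2.5, clock_hz=2.0e9)

print(f"4 GHz, IPC 1.0 : {fast_clock_low_ipc:.2f} s")
print(f"2 GHz, IPC 2.5 : {slow_clock_high_ipc:.2f} s")   # faster despite the lower clock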

Power efficiency

Power efficiency is another important measurement in modern computers. A higher power efficiency can often be traded for lower speed or higher cost. The typical measurement of power efficiency in computer architecture is MIPS/W (millions of instructions per second per watt). Modern circuits require less power per transistor as the number of transistors per chip grows, even though each transistor added to a chip needs its own supply current and new pathways to deliver that power. However, the number of transistors per chip is now increasing at a slower rate, so power efficiency is becoming as important as, if not more important than, fitting more and more transistors into a single chip. Recent processor designs reflect this emphasis, focusing on power efficiency rather than cramming as many transistors as possible into a single chip. In the world of embedded computers, power efficiency has long been an important goal, alongside throughput and latency.
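Continuing the hypothetical numbers from the previous sketch, MIPS/W is simply instruction throughput divided by power draw (the wattages below are assumed, not measured figures for any real chip).

def mips_per_watt(instructions_per_second, watts):
    """Millions of instructions per second delivered per watt of power consumed."""
    return (instructions_per_second / 1e6) / watts

# Hypothetical chips: a fast, power-hungry design versus a slower, frugal one.
print(f"desktop CPU : {mips_per_watt(5.0e9, 65.0):.0f} MIPS/W")
print(f"embedded CPU: {mips_per_watt(1.0e9, 2.0):.0f} MIPS/W")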

Shifts in market demand

Increases in publicly released refresh rates have been modest over the past few years, compared with vast leaps in power-consumption reduction and miniaturization. This reflects a new demand for longer battery life and smaller devices, driven by mobile technology being produced at a greater rate. The change in focus from higher refresh rates to power consumption and miniaturization can be seen in the significant reductions in power consumption, as much as 50%, that Intel reported with the release of the Haswell microarchitecture, where the power-consumption benchmark dropped from 30–40 watts to 10–20 watts. Comparing this with the processing-speed increase from 3 GHz to 4 GHz (2002 to 2006), it can be seen that the focus of research and development is shifting away from refresh rates and towards consuming less power and taking up less space.

Services computing

Services Computing has become a cross-discipline that covers the science and technology of bridging the gap between business services and IT services. The underlying technology suite includes Web services and service-oriented architecture (SOA), cloud computing, business consulting methodology and utilities, and business process modeling, transformation and integration. The scope of Services Computing covers the whole life-cycle of services innovation research, including business componentization, services modeling, services creation, services realization, services annotation, services deployment, services discovery, services composition, services delivery, service-to-service collaboration, services monitoring, services optimization, and services management. The goal of Services Computing is to enable IT services and computing technology to perform business services more efficiently and effectively.

Computer Support Services, Inc., or CSSI, is a multi-national company providing technology solutions and professional services. The company is best known for releasing CoreIntegrator Workflow, a workflow/business process management (BPM) technology suite. CSSI is a Microsoft Silver Certified Partner and a re-seller of Microsoft Dynamics GP (formerly known as Great Plains), a Platinum Partner of Intermec, and a Gold Partner of Motorola Solutions providing supply chain solutions. CSSI is headquartered in Lewisburg, Pennsylvania, with offices in Pittsburgh and Wyomissing, and is the parent company of CSSI Global Technologies, located in Bangalore, India. In 2007, the Inc. 5000 recognized CSSI as one of the 5,000 fastest-growing companies in the United States.

In telecommunication, a telecommunications service is a service provided by a telecommunications provider, or a specified set of user-information transfer capabilities provided to a group of users by a telecommunications system. The telecommunications service user is responsible for the information content of the message. The telecommunications service provider has the responsibility for the acceptance, transmission, and delivery of the message. For purposes of regulation by the Federal Communications Commission under the U.S. Communications Act of 1934 and Telecommunications Act of 1996, the definition of telecommunications service is “the offering of telecommunications for a fee directly to the public, or to such classes of users as to be effectively available directly to the public, regardless of the facilities used.” Telecommunications, in turn, is defined as “the transmission, between or among points specified by the user, of information of the user’s choosing, without change in the form or content of the information as sent and received.”

In computer networking, a network service is an application running at the network application layer and above that provides data storage, manipulation, presentation, communication or other capability, often implemented using a client-server or peer-to-peer architecture based on application layer network protocols. Each service is usually provided by a server component running on one or more computers (often a dedicated server computer offering multiple services) and accessed via a network by client components running on other devices. However, the client and server components can both be run on the same machine. Clients and servers will often have a user interface, and sometimes other hardware associated with them.

In computer network programming, the application layer is an abstraction layer reserved for communications protocols and methods designed for process-to-process communications across an Internet Protocol (IP) computer network. Application layer protocols use the underlying transport layer protocols to establish host-to-host connections for network services.

TCP/IP network services

Port numbers

Many Internet Protocol-based services are associated with a particular well-known port number that is standardized by Internet technical governance. For example, World Wide Web servers operate on port 80, and email relay servers usually listen on port 25.
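On most systems these well-known assignments can be looked up from the local services database; a small sketch using Python’s standard socket module (the exact entries available depend on the operating system):

import socket

for service, protocol in [("http", "tcp"), ("smtp", "tcp"), ("domain", "udp")]:
    port = socket.getservbyname(service, protocol)
    print(f"{service}/{protocol} -> port {port}")
# Typically prints 80 for http and 25 for smtp, matching the examples above.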

TCP versus UDP

Different services use different packet transmission techniques. In general, packets that must get through in the correct order, without loss, use TCP, whereas real time services where later packets are more important than older packets use UDP. For example, file transfer requires complete accuracy and so is normally done using TCP, and audio conferencing is frequently done via UDP, where momentary glitches may not be noticed. UDP lacks built-in network congestion avoidance and the protocols that use it must be extremely carefully designed to prevent network collapse.
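In programming terms the difference looks roughly like the sketch below, using Python’s standard socket module. The host and port numbers are placeholders, and the TCP example assumes something is already listening on that port; the point is only that TCP requires a connection and guarantees ordered delivery, while UDP datagrams are simply sent and may be lost or reordered.

import socket

HOST, TCP_PORT, UDP_PORT = "127.0.0.1", 9000, 9001   # placeholder endpoints

# TCP: connection-oriented, ordered, reliable delivery (e.g. file transfer).
# Assumes a server is listening on TCP_PORT; connect() performs the handshake.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp:
    tcp.connect((HOST, TCP_PORT))
    tcp.sendall(b"must arrive intact and in order")

# UDP: connectionless datagrams; late or lost packets are simply dropped
# (e.g. audio conferencing, where a momentary glitch may go unnoticed).
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
    udp.sendto(b"best-effort audio frame", (HOST, UDP_PORT))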

Computer Learns to Recognize Sounds

In recent years, computers have gotten remarkably good at recognizing speech and images: Think of the dictation software on most cellphones, or the algorithms that automatically identify people in photos posted to Facebook.

But recognition of natural sounds — such as crowds cheering or waves crashing — has lagged behind. That’s because most automated recognition systems, whether they process audio or visual information, are the result of machine learning, in which computers search for patterns in huge compendia of training data. “Computer vision has gotten so good that we can transfer it to other domains,” says Carl Vondrick, an MIT graduate student in electrical engineering and computer science and one of the paper’s two first authors. “We’re capitalizing on the natural synchronization between vision and sound. We scale up with tons of unlabeled video to learn to understand sound.” The researchers tested their system on two standard databases of annotated sound recordings, and it was between 13 and 15 percent more accurate than the best-performing previous system. On a data set with 10 different sound categories, it could categorize sounds with 92 percent accuracy, and on a data set with 50 categories it performed with 74 percent accuracy. On those same data sets, humans are 96 percent and 81 percent accurate, respectively.

Complementary modalities

Because it takes far less power to collect and process audio data than it does to collect and process visual data, the researchers envision that a sound-recognition system could be used to improve the context sensitivity of mobile devices. When coupled with GPS data, for instance, a sound-recognition system could determine that a cellphone user is in a movie theater and that the movie has started, and the phone could automatically route calls to a prerecorded outgoing message. Similarly, sound recognition could improve the situational awareness of autonomous robots.

Visual language

The researchers’ machine-learning system is a neural network, so called because its architecture loosely resembles that of the human brain. A neural net consists of processing nodes that, like individual neurons, can perform only rudimentary computations but are densely interconnected. Information — say, the pixel values of a digital image — is fed to the bottom layer of nodes, which processes it and feeds it to the next layer, which processes it and feeds it to the next layer, and so on. The training process continually modifies the settings of the individual nodes, until the output of the final layer reliably performs some classification of the data — say, identifying the objects in the image.

Vondrick, Aytar, and Torralba first trained a neural net on two large, annotated sets of images: one, the ImageNet data set, contains labeled examples of images of 1,000 different objects; the other, the Places data set created by Oliva’s group and Torralba’s group, contains labeled images of 401 different scene types, such as a playground, bedroom, or conference room. Once the network was trained, the researchers fed it the video from 26 terabytes of video data downloaded from the photo-sharing site Flickr.

Benchmarking

To compare the sound-recognition network’s performance to that of its predecessors, however, the researchers needed a way to translate its language of images into the familiar language of sound names. So they trained a simple machine-learning system to associate the outputs of the sound-recognition network with a set of standard sound labels. For that, the researchers did use a database of annotated audio — one with 50 categories of sound and about 2,000 examples. Those annotations had been supplied by humans. But it’s much easier to label 2,000 examples than to label 2 million. And the MIT researchers’ network, trained first on unlabeled video, significantly outperformed all previous networks trained solely on the 2,000 labeled examples.
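A hedged sketch of that last step follows: a simple linear classifier trained to map fixed embedding vectors to sound labels. Random data stands in for the sound-recognition network’s outputs and for the human-supplied labels, and scikit-learn is assumed to be available; none of this reproduces the researchers’ actual setup.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for the network's output vectors and the annotated labels
# (50 sound categories, about 2,000 labeled examples).
n_examples, n_dims, n_classes = 2000, 128, 50
embeddings = rng.normal(size=(n_examples, n_dims))
labels = rng.integers(0, n_classes, size=n_examples)

# The "simple machine-learning system": a linear classifier on top of
# embeddings that were learned without any sound labels at all.
clf = LogisticRegression(max_iter=200).fit(embeddings, labels)
print(clf.predict(embeddings[:5]))   # predicted sound-category indices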