
Accessing Online Learning Material

Educational technology is the use of both physical hardware and educational theory. It encompasses several domains, including learning theory, computer-based training, online learning, and, where mobile technologies are used, m-learning. One widely cited definition describes educational technology as “the study and ethical practice of facilitating learning and improving performance by creating, using, and managing appropriate technological processes and resources”. Accordingly, there are several discrete aspects to describing the intellectual and technical development of educational technology:

  • educational technology as the theory and practice of educational approaches to learning
  • educational technology as technological tools and media that assist in the communication of knowledge, and its development and exchange
  • educational technology for learning management systems (LMS), such as tools for student and curriculum management, and education management information systems (EMIS)
  • educational technology as back-office management, such as training management systems for logistics and budget management, and learning record stores (LRS) for learning data storage and analysis
  • educational technology itself as an educational subject; such courses may be called “Computer Studies” or “Information and Communications Technology (ICT)”

An educational technologist is someone who is trained in the field of educational technology. Educational technologists try to analyze, design, develop, implement, and evaluate processes and tools to enhance learning.

Accessing learning materials, that is, lecture slides, video lectures, shared assignments, and forum messages, is the most frequently performed online learning activity. However, students with different purposes, motivations, and preferences may exhibit different behaviors when accessing these materials. These different behaviors may further affect their learning performance. This study analyzed system logs recorded by a Learning Management System in which 59 computer science students participated in a blended learning course to learn mobile phone programming. The results revealed several significant findings. First, the students viewed the learning materials related to their classroom lectures (i.e., lecture slides and video lectures) for longer and more often than other learning materials (i.e., shared assignments and posted messages). Second, although the students spent a great deal of time viewing the online learning materials, most did not use annotation tools. Third, students’ viewing behaviors showed great variety and were clustered into three behavior patterns: “consistent use students” who intensively used all of the learning materials, “slide intensive use students” who intensively used the lecture slides, and “less use students” who infrequently used any learning material. These different behavior patterns were also associated with their motivation and learning performance. The results are discussed, and several suggestions for teachers, researchers, and system designers are proposed.

Keywords:

  • Distributed learning environments
  • Media in education
  • Post-secondary education
  • Teaching/learning strategies

Given this definition, educational technology is an inclusive term for both the material tools and the theoretical foundations that support learning and teaching. Educational technology is not restricted to high technology: it is anything that enhances classroom learning, including through blended or online learning.

However, modern electronic educational technology is an important part of society today.[11] Educational technology encompasses e-learning, instructional technology, information and communication technology (ICT) in education, EdTech, learning technology, multimedia learning, technology-enhanced learning (TEL), computer-based instruction (CBI), computer-managed instruction, computer-based training (CBT), computer-assisted or computer-aided instruction (CAI), internet-based training (IBT), flexible learning, web-based training (WBT), online education, digital educational collaboration, distributed learning, computer-mediated communication, cyber-learning, multi-modal instruction, virtual education, personal learning environments, networked learning, virtual learning environments (VLE) (also called learning platforms), m-learning, ubiquitous learning, and digital education.

Software Developer

A software developer is a person concerned with facets of the software development process, including the research, design, programming, and testing of computer software. Other job titles often used with similar meanings are programmer, software analyst, and software engineer. According to developer Eric Sink, the distinctions among system design, software development, and programming are becoming more apparent. The marketplace already separates programmers from developers: the person who implements is not the same as the person who designs the class structure or hierarchy. Developers, in turn, may become systems architects, who design the multi-level architecture or component interactions of a large software system (see also the debate over who is a software engineer). In a large company, there may be employees whose sole responsibility consists of only one of the phases above. In smaller development environments, a few people or even a single individual might handle the complete process.

A software developer typically holds a relevant qualification, such as a BTEC, an HND, or a degree in computer science, information technology, engineering, or another IT-related field. An ideal software developer is a self-motivated professional with hands-on experience in key programming languages such as C++, JavaScript, Visual Basic, Python, and Smalltalk, and in related technologies and standards such as Oracle, UML, Linux, UNIX, XML, and HTTP, as well as software testing tools.

The key skills required are:

  • Debugging and a problem-solving approach
  • Excellent knowledge and understanding of tools and technology
  • Strong interpersonal skills
  • Excellent communication skills
  • Ability to thrive under pressure and long work hours

Layla Shaikley SM ’13 began her master’s in architecture at MIT with a hunger to redevelop nations recovering from conflict. When she decided that data and logistics contributed more immediately to development than architecture did, Shaikley switched to the Media Lab to work with Professor Sandy Pentland, and became a cofounder of Wise Systems, which develops routing software that helps companies deliver goods and services.

But Shaikley is perhaps better known for a viral video, “Muslim Hipsters: #mipsterz,” that she and friends created to combat the media stereotypes of Muslim women. It reached hundreds of thousands of viewers and received vigorous positive and negative feedback.

The video “is a really refreshing, jovial view of an underrepresented identity: young American Muslim women with alternative interests in the arts and culture,” Shaikley says. “The narrow media image is so far from the real fabric of Muslim-American life that we all need to add our pieces to the quilt to create a more accurate image.”

Shaikley’s parents moved from Iraq to California in the 1970s, and she and her five siblings enjoyed a “quintessentially all-American childhood,” she says. “I grew up on a skateboard, and I love to surf and snowboard.” She feels deeply grateful to her parents, who “always put our needs first,” she adds. “When we visited relatives in Iraq, we observed what life is like when people don’t have the privilege of a free society. Those experiences really shaped my understanding of the world and also my sense of responsibility to give back.”

Shaikley says the sum of her diverse life experiences has helped her as a professional with Wise Systems and as a voice for underrepresented Muslim women.

 

Software Bugs

A software bug is an error, flaw, failure or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways.

Most bugs arise from mistakes and errors made in either a program’s source code or its design, or in components and operating systems used by such programs. A few are caused by compilers producing incorrect code. A program that contains a large number of bugs, and/or bugs that seriously interfere with its functionality, is said to be buggy (defective). Bugs trigger errors that may have ripple effects. Bugs may have subtle effects or cause the program to crash or freeze the computer. Others qualify as security bugs and might, for example, enable a malicious user to bypass access controls in order to obtain unauthorized privileges.

The software industry has put much effort into reducing bug counts. These efforts include the following:

Typographical errors

Bugs usually appear when the programmer makes a logic error. Various innovations in programming style and defensive programming are designed to make these bugs less likely, or easier to spot. Some typos, especially of symbols or logical/mathematical operators, allow the program to operate incorrectly, while others such as a missing symbol or misspelled name may prevent the program from operating. Compiled languages can reveal some typos when the source code is compiled.
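
As a rough illustration (a hypothetical Python sketch, not drawn from the text above), the two failure modes look like this: an operator typo lets the program run but compute the wrong answer, while a misspelled name stops it from operating at all.

    def is_adult(age):
        # Operator typo: ">" was intended to be ">=". The program still runs,
        # but misclassifies anyone who is exactly 18 -- it operates incorrectly
        # rather than failing outright.
        return age > 18

    print(is_adult(18))  # prints False, although True was intended

    # A misspelled name, by contrast, prevents operation: calling is_adlut(20)
    # here would raise NameError at runtime, and a compiled language or static
    # checker would flag it before the program ever ran.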

Development methodologies

Several schemes assist in managing programmer activity so that fewer bugs are produced. Software engineering (which addresses software design issues as well) applies many techniques to prevent defects. For example, formal program specifications state the exact behavior of programs so that design bugs may be eliminated. Unfortunately, formal specifications are impractical for anything but the shortest programs, because of problems of combinatorial explosion and indeterminacy.

Unit testing involves writing a test for every function (unit) that a program is to perform. In test-driven development unit tests are written before the code and the code is not considered complete until all tests complete successfully. Agile software development involves frequent software releases with relatively small changes. Defects are revealed by user feedback.
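
A minimal sketch of the unit-testing idea, using Python’s standard unittest module; add() is a hypothetical “unit” standing in for real application code. Under test-driven development, these tests would be written first, fail, and only then would add() be implemented until they pass.

    import unittest

    def add(a, b):
        """The unit under test."""
        return a + b

    class TestAdd(unittest.TestCase):
        def test_positive(self):
            self.assertEqual(add(2, 3), 5)

        def test_negative(self):
            self.assertEqual(add(-1, 1), 0)

    if __name__ == "__main__":
        unittest.main()  # the code is "complete" only when all tests pass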

Programming language support

Programming languages include features to help prevent bugs, such as static type systems, restricted namespaces and modular programming. For example, when a programmer writes (pseudocode) LET REAL_VALUE PI = "THREE AND A BIT", although this may be syntactically correct, the code fails a type check. Compiled languages catch this without having to run the program; interpreted languages catch such errors only at runtime. Some languages deliberately exclude features that easily lead to bugs, at the expense of slower performance: the general principle is that it is almost always better to write simpler, slower code than inscrutable code that runs slightly faster, especially considering that maintenance cost is substantial. For example, the Java programming language does not support pointer arithmetic, and implementations of some languages such as Pascal and scripting languages often have runtime bounds checking of arrays, at least in a debugging build.
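
A rough Python analogue of the pseudocode above, assuming a static checker such as mypy is in use: the checker rejects the annotated assignment before the program runs, while plain CPython does not enforce annotations, so the mistake surfaces only at runtime.

    # mypy reports: Incompatible types in assignment (str vs float)
    PI: float = "THREE AND A BIT"

    # In plain CPython this line is the first failure, raising TypeError
    # at runtime when the string is used as a number.
    area = PI * 10.0 ** 2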

Code analysis

Tools for code analysis help developers by inspecting the program text beyond the compiler’s capabilities to spot potential problems. Although in general the problem of finding all programming errors given a specification is not solvable (see halting problem), these tools exploit the fact that human programmers tend to make certain kinds of simple mistakes often when writing software.

Instrumentation

Tools to monitor the performance of the software as it is running, either specifically to find problems such as bottlenecks or to give assurance as to correct working, may be embedded in the code explicitly (perhaps as simple as a statement saying PRINT "I AM HERE"), or provided as tools. It is often a surprise to find where most of the time is taken by a piece of code, and this removal of assumptions might cause the code to be rewritten.
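
As a small illustration, here is a hypothetical Python sketch of embedded instrumentation: a decorator that times a function and prints the result, a slightly more informative cousin of the PRINT "I AM HERE" statement mentioned above.

    import functools
    import time

    def timed(func):
        """Wrap func so each call reports its wall-clock duration."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            elapsed = time.perf_counter() - start
            print(f"{func.__name__} took {elapsed:.6f}s")  # "I AM HERE", with data
            return result
        return wrapper

    @timed
    def slow_sum(n):
        return sum(range(n))

    slow_sum(1_000_000)  # reveals where the time actually goes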

Testing

Software testers are people whose primary task is to find bugs, or write code to support testing. On some projects, more resources may be spent on testing than in developing the program.

Measurements during testing can provide an estimate of the number of likely bugs remaining; this becomes more reliable the longer a product is tested and developed.

Debugging

A typical bug history (GNU Classpath project data) runs as follows: a new bug submitted by the user is unconfirmed; once it has been reproduced by a developer, it is a confirmed bug; confirmed bugs are later fixed. Bugs belonging to other categories (unreproducible, will not be fixed, etc.) are usually in the minority.

Finding and fixing bugs, or debugging, is a major part of computer programming. Maurice Wilkes, an early computing pioneer, described his realization in the late 1940s that much of the rest of his life would be spent finding mistakes in his own programs.

Usually, the most difficult part of debugging is finding the bug. Once it is found, correcting it is usually relatively easy. Programs known as debuggers help programmers locate bugs by executing code line by line, watching variable values, and other features to observe program behavior. Without a debugger, code may be added so that messages or values may be written to a console or to a window or log file to trace program execution or show values. More typically, the first step in locating a bug is to reproduce it reliably. Once the bug is reproducible, the programmer may use a debugger or other tool while reproducing the error to find the point at which the program went astray.
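
A minimal sketch of the trace-message approach described above (hypothetical Python code, using the standard logging module): each log line records the values in play, so the last message before a failure localizes the bug.

    import logging

    logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")

    def divide(a, b):
        logging.debug("divide called with a=%r, b=%r", a, b)
        result = a / b
        logging.debug("divide returning %r", result)
        return result

    divide(10, 2)
    divide(10, 0)  # raises ZeroDivisionError; the last DEBUG line shows the inputs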

Bug management

Bug management includes the process of documenting, categorizing, assigning, reproducing, correcting and releasing the corrected code. Proposed changes to software – bugs as well as enhancement requests and even entire releases – are commonly tracked and managed using bug tracking systems or issue tracking systems. The items added may be called defects, tickets, issues, or, following the agile development paradigm, stories and epics. Categories may be objective, subjective or a combination, such as version number, area of the software, severity and priority, as well as what type of issue it is, such as a feature request or a bug.
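
As a sketch of the categorization described above, a hypothetical issue record might look like the following (the field names are illustrative, not those of any particular tracker).

    from dataclasses import dataclass
    from enum import Enum

    class Severity(Enum):
        LOW = 1
        MEDIUM = 2
        HIGH = 3
        CRITICAL = 4

    @dataclass
    class Issue:
        title: str
        kind: str           # e.g. "bug" or "feature request"
        version: str        # version of the software affected
        area: str           # area of the software
        severity: Severity  # objective impact
        priority: int       # subjective urgency; 1 = most urgent

    ticket = Issue("Crash on save", "bug", "2.1.3", "editor", Severity.CRITICAL, 1)
    print(ticket)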

 

Communications protocol

In telecommunications, a communication protocol is a system of rules that allows two or more entities of a communications system to transmit information via any kind of variation of a physical quantity. The protocol defines the rules, syntax, semantics, and synchronization of communication, as well as possible error recovery methods. Protocols may be implemented by hardware, software, or a combination of both.

Communicating systems use well-defined formats (protocol) for exchanging various messages. Each message has an exact meaning intended to elicit a response from a range of possible responses pre-determined for that particular situation. The specified behavior is typically independent of how it is to be implemented. Communications protocols have to be agreed upon by the parties involved. To reach agreement, a protocol may be developed into a technical standard. A programming language describes the same for computations, so there is a close analogy between protocols and programming languages: protocols are to communications what programming languages are to computations.
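
To make the idea of a well-defined format concrete, here is a hypothetical Python sketch of a tiny protocol: a five-byte header fixes the syntax, agreed-upon type codes fix the semantics, and each message type has a pre-determined response.

    import struct

    HEADER = struct.Struct("!BI")   # network byte order: 1-byte type, 4-byte length
    MSG_PING, MSG_PONG = 1, 2       # type codes both parties agree on

    def encode(msg_type: int, payload: bytes) -> bytes:
        return HEADER.pack(msg_type, len(payload)) + payload

    def decode(data: bytes):
        msg_type, length = HEADER.unpack(data[:HEADER.size])
        return msg_type, data[HEADER.size:HEADER.size + length]

    msg_type, payload = decode(encode(MSG_PING, b"hello"))
    assert (msg_type, payload) == (MSG_PING, b"hello")
    reply = encode(MSG_PONG, b"")   # the response pre-determined for a ping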

Multiple protocols often describe different aspects of a single communication. A group of protocols designed to work together is known as a protocol suite; when implemented in software, they form a protocol stack.

Internet communication protocols are published by the Internet Engineering Task Force (IETF). The IEEE handles wired and wireless networking, and the International Organization for Standardization (ISO) handles other types. The ITU-T handles telecommunications protocols and formats for the public switched telephone network (PSTN). As the PSTN and Internet converge, their standards are also being driven towards convergence.

Protocols are to communications what algorithms or programming languages are to computations.

This analogy has important consequences for both the design and the development of protocols. One has to consider the fact that algorithms, programs, and protocols are just different ways of describing the expected behavior of interacting objects. A familiar example of a protocolling language is HTML, the language used to describe web pages, which are the actual web documents.

In programming languages the association of identifiers to a value is termed a definition. Program text is structured using block constructs and definitions can be local to a block. The localized association of an identifier to a value established by a definition is termed a binding and the region of program text in which a binding is effective is known as its scope. The computational state is kept using two components: the environment, used as a record of identifier bindings, and the store, which is used as a record of the effects of assignments.
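
These notions are easy to see in code. In the hypothetical Python sketch below, each assignment in a block creates a local binding whose scope is that block, while mutation of an existing structure changes the store rather than creating a new binding.

    x = "outer"              # a binding of x in the module environment

    def block():
        x = "inner"          # a new local binding; its scope is this function
        return x

    print(block())           # "inner" -- the local binding shadows the outer one
    print(x)                 # "outer" -- the outer binding is untouched

    cells = [0]
    def mutate():
        cells[0] = 42        # no new binding: an assignment's effect on the store
    mutate()
    print(cells[0])          # 42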

Most data center networks still rely on TCP to manage their traffic. That’s not because TCP is perfect or because computer scientists have had trouble coming up with possible alternatives; it’s because those alternatives are too hard to test. The routers in data center networks have their traffic management protocols hardwired into them. Testing a new protocol means replacing the existing network hardware with either reconfigurable chips, which are labor-intensive to program, or software-controlled routers, which are so slow that they render large-scale testing impractical. An emulation system offers a way out: it maintains a compact, efficient computational model of a network running the new protocol, with virtual data packets that bounce around among virtual routers. On the basis of the model, it schedules transmissions on the real network to produce the same traffic patterns. Researchers could thus run real web applications on the network servers and get an accurate sense of how the new protocol would affect their performance.

Traffic control

Each packet of data sent over a computer network has two parts: the header and the payload. The payload contains the data the recipient is interested in — image data, audio data, text data, and so on. The header contains the sender’s address, the recipient’s address, and other information that routers and end users can use to manage transmissions. When multiple packets reach a router at the same time, they’re put into a queue and processed sequentially. With TCP, if the queue gets too long, subsequent packets are simply dropped; they never reach their recipients. When a sending computer realizes that its packets are being dropped, it cuts its transmission rate in half, then slowly ratchets it back up. A better protocol might enable a router to flip bits in packet headers to let end users know that the network is congested, so they can throttle back transmission rates before packets get dropped. Or it might assign different types of packets different priorities, and keep the transmission rates up as long as the high-priority traffic is still getting through. These are the types of strategies that computer scientists are interested in testing out on real networks.
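
The rate-halving behavior is easy to caricature in code. The toy Python loop below (a sketch of additive-increase/multiplicative-decrease, not a real TCP implementation; the units and capacity are made up) shows the characteristic sawtooth: the rate climbs until packets drop, then halves.

    rate = 1.0        # sending rate in packets per tick (hypothetical units)
    CAPACITY = 10.0   # router capacity; beyond this, packets are dropped

    for tick in range(20):
        dropped = rate > CAPACITY
        if dropped:
            rate /= 2     # cut the transmission rate in half on loss...
        else:
            rate += 1.0   # ...then slowly ratchet it back up
        print(f"tick {tick:2d}: rate={rate:5.2f} dropped={dropped}")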

Speedy simulation

When a server on the real network wants to transmit data, it sends a request to the emulator, which sends a dummy packet over a virtual network governed by the new protocol. When the dummy packet reaches its destination, the emulator tells the real server that it can go ahead and send its real packet. If, while passing through the virtual network, a dummy packet has some of its header bits flipped, the real server flips the corresponding bits in the real packet before sending it. If a clogged router on the virtual network drops a dummy packet, the corresponding real packet is never sent. And if, on the virtual network, a higher-priority dummy packet reaches a router after a lower-priority packet but jumps ahead of it in the queue, then on the real network, the higher-priority packet is sent first.
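
A highly simplified sketch of that handshake, in hypothetical Python (the real system is far more involved; all names here are invented): the emulator routes a dummy copy of the header through the virtual network, and its verdict decides whether and how the real packet is sent.

    from dataclasses import dataclass, field

    @dataclass
    class Packet:
        header: dict
        payload: bytes = b""

    @dataclass
    class Verdict:
        dropped: bool
        header: dict = field(default_factory=dict)

    class VirtualNetwork:
        """Stands in for the modeled network running the new protocol."""
        def route(self, header: dict) -> Verdict:
            if header.get("priority", 0) < 0:                 # toy drop rule
                return Verdict(dropped=True)
            return Verdict(dropped=False, header={"congestion": 1})  # flip a bit

    def request_send(net: VirtualNetwork, packet: Packet):
        verdict = net.route(dict(packet.header))   # the "dummy packet"
        if verdict.dropped:
            return None                            # real packet is never sent
        packet.header.update(verdict.header)       # mirror any flipped header bits
        return packet                              # caller now transmits for real

    print(request_send(VirtualNetwork(), Packet({"priority": 1})))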

The servers on the network thus see the same packets in the same sequence that they would if the real routers were running the new protocol. There’s a slight delay between the first request issued by the first server and the first transmission instruction issued by the emulator. But thereafter, the servers issue packets at normal network speeds. The ability to use real servers running real web applications offers a significant advantage over another popular technique for testing new network management schemes: software simulation, which generally uses statistical patterns to characterize the applications’ behavior in a computationally efficient manner.

 

Taming Big Data

Big Data can be a beast. Data volumes are growing exponentially. The types of data being created are likewise proliferating. And the speed at which data is being created – and the need to analyze it in near real-time to derive value from it – is increasing with each passing hour.

But Big Data can be tamed. We’ve got living proof. Thanks to new approaches for processing, storing and analyzing massive volumes of multi-structured data – such as Hadoop and MPP analytic databases – enterprises of all types are uncovering new and valuable insights from Big Data every day.

Leading the way are Web giants like Facebook, LinkedIn and Amazon. Following close behind are early adopters in financial services, healthcare and media. And now it’s your turn. From marketing campaign analysis and social graph analysis to network monitoring, fraud detection and risk modeling, there’s unquestionably a Big Data use case out there with your company’s name on it.

In an era where Big Data can greatly impact a broad population, many novel opportunities arise, chief among them the ability to integrate data from diverse sources and “wrangle” it to extract novel insights. Conceived as a tool that can help both expert and non-expert users better understand public data, MATTERS was collaboratively developed by the Massachusetts High Tech Council, WPI and other institutions as an analytic platform offering dynamic modeling capabilities. MATTERS is an integrative data source on high-fidelity cost and talent competitiveness metrics. Its goal is to extract, integrate and model rich economic, financial, educational and technological information – all known to be critical factors influencing the economic competitiveness of states – from renowned heterogeneous web data sources ranging from the US Census Bureau and the Bureau of Labor Statistics to the Institute of Education Sciences. This demonstration of MATTERS illustrates how we tackle challenges of data acquisition, cleaning, integration and wrangling into appropriate representations, visualization and story-telling with data in the context of state competitiveness in the high-tech sector.

Unstructured data refers to information that either does not have a pre-defined data model or is not organized in a pre-defined manner. Unstructured information is typically text-heavy, but may contain data such as dates, numbers, and facts as well. This results in irregularities and ambiguities that make it difficult to understand using traditional programs as compared to data stored in fielded form in databases or annotated (semantically tagged) in documents.

Data in greater volume, velocity and variety has some business leaders riding a big data analytics tiger in search of new commercial opportunities. Now, several years into the big data era, some taming of the tiger seems to be in order. That’s according to several data professionals from large enterprises on hand at IBM’s recent Information On Demand (IOD) conference in Las Vegas. While they see potential for a new breed of data-driven applications, they also see a need to rein in unbridled efforts, which means applying more rigorous planning, refining analytics skills and instituting more data governance.

Data variety and ‘dangerous insights’

Telecommunications giant Verizon Wireless, headquartered in New York, has always had data volume and velocity to deal with. What’s new is the variety of data that big data analytics must work with, said Ksenija Draskovic, a Verizon predictive analytics and data science manager, who discussed the implications for predictive analytics in another IOD session.

C-suite buy-in of the big data kind

Madhavan said he and his JP Morgan Chase colleagues have worked to create better planning methodologies to deal with big data. Steps are in place to ensure business users have an idea beforehand of the kind of data they want to work with, what business goals they hope to achieve, and what kind of revenue can be expected if the new application is wildly successful.

Big Data

Big data is a term for data sets that are so large or complex that traditional data processing application software is inadequate to deal with them. Big data challenges include capturing data, data storage, data analysis, search, sharing, transfer, visualization, querying, updating and information privacy.

Lately, the term “big data” tends to refer to the use of predictive analytics, user behavior analytics, or certain other advanced data analytics methods that extract value from data, and seldom to a particular size of data set. “There is little doubt that the quantities of data now available are indeed large, but that’s not the most relevant characteristic of this new data ecosystem.” Analysis of data sets can find new correlations to “spot business trends, prevent diseases, combat crime and so on.” Scientists, business executives, practitioners of medicine, advertising and governments alike regularly meet difficulties with large data-sets in areas including Internet search, fintech, urban informatics, and business informatics. Scientists encounter limitations in e-Science work, including meteorology, genomics, connectomics, complex physics simulations, biology and environmental research.

Relational database management systems and desktop statistics and visualization packages often have difficulty handling big data. The work may require “massively parallel software running on tens, hundreds, or even thousands of servers”. What counts as “big data” varies depending on the capabilities of the users and their tools, and expanding capabilities make big data a moving target. “For some organizations, facing hundreds of gigabytes of data for the first time may trigger a need to reconsider data management options. For others, it may take tens or hundreds of terabytes before data size becomes a significant consideration.”
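
The “massively parallel” pattern can be miniaturized to a single machine for illustration. The hypothetical Python sketch below maps a word count over chunks of text in parallel worker processes and then reduces the partial counts – the same map-and-reduce shape that systems such as Hadoop apply across thousands of servers.

    from collections import Counter
    from multiprocessing import Pool

    def count_words(chunk: str) -> Counter:
        """The 'map' step: count words in one chunk of the data."""
        return Counter(chunk.split())

    if __name__ == "__main__":
        chunks = ["big data big", "data moving target", "big target"]
        with Pool(processes=3) as pool:
            partials = pool.map(count_words, chunks)  # run in parallel
        total = sum(partials, Counter())              # the 'reduce' step
        print(total.most_common(3))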

  • The Brain’s On-Ramps

When facing a challenging scientific problem, researchers often turn to supercomputers. These powerful machines crunch large amounts of data and, with the right software, spin out images that make the data easier to understand. Advanced computational methods and technologies, for instance, can provide unprecedented maps of the human brain: in one such image, the colors represent white matter pathways that allow distant parts of the brain to communicate with each other. For over 30 years, the National Science Foundation has invested in high-performance computing, both pushing the frontiers of advanced computing hardware and software and providing access to supercomputers for researchers across a range of disciplines. Use of NSF-supported research cyberinfrastructure resources is at an all-time high and continues to increase across all science and engineering disciplines.

  • Stellar Turbulence

Simulations help astrophysicists understand and model the turbulent mixing of star gases. This image, created at the Pittsburgh Supercomputing Center (PSC), depicts a 3-D mixing layer between two fluids of different densities in a gravitational field. In this case, a heavy gas is on top of a lighter one. This type of mixing plays an essential role in stellar convection. Understanding mixing dynamics will help researchers with a long-term goal of visualizing the turbulent flows of an entire giant star, one similar to the sun. PSC is a leading partner in NSF’s eXtreme Science and Engineering Discovery Environment (XSEDE), which provides researchers and educators with the most powerful collection of advanced digital resources in the world.

  • Solar Fury

This glowing maelstrom results from magnetic arcs (orange lines) that shoot hundreds of thousands of kilometers above the sun’s surface. When the electrically charged arcs destabilize, they cause plasma to erupt from the sun’s surface. If one eruption follows another, the second will spew forth faster than the first, catching up and merging with it. Researchers at the National Center for Supercomputing Applications’ Advanced Visualization Laboratory are simulating events that trigger solar eruptions to aid prediction of future solar storms. Such advances will improve preparations to mitigate and prevent the storms’ worst effects, such as knocking out the electric power grid and disrupting satellite communications.

  • Churning Out a Supernova

This 3-D simulation captures the dynamics that lead to the explosive birth of a supernova. The red colors indicate hot, chaotic material and the blue colors show cold, inert material. Two giant polar lobes form when strongly magnetized material ejected from the star’s center distorts and fails to launch cleanly away. The simulation was created on Stampede, a supercomputer at the Texas Advanced Computing Center.

  • Spinning Hydrogen

On the bridge of the Allosphere, one of the largest immersive scientific instruments in the world, researchers interact with a spinning hydrogen atom. The bridge runs through the center of the spherical display, which includes stereo video projectors covering the entire visual field, immersive audio, and devices to sense, track and engage users. Located at the University of California, Santa Barbara, the Allosphere allows researchers to visualize, explore and evaluate scientific data too small to see and hear. By magnifying the information to the human scale, researchers can better analyze the data to gain new insights into challenging problems.

Hacking discrimination

The 11 pitches presented during the two-day hackathon covered a wide range of issues affecting communities of color, including making routine traffic stops less harmful for motorists and police officers, preventing bias in the hiring process by creating a professional profile using a secure blockchain system, flagging unconscious biases using haptic (touch-based) feedback and augmented reality, and providing advice for those who experience discrimination.

The Innovation prize was awarded to Taste Voyager, a platform that enables individuals or families to host guests and foster cultural understanding over a home-cooked meal. The Impact prize went to Rahi, a smartphone app that makes shopping easier for recipients of the federally funded Women, Infants, and Children food-assistance program. The Storytelling prize was awarded to Just-Us and Health, which uses surveys to track the effects of discrimination in neighborhoods.

With a human-centered design process as the guideline, Punjwani encouraged participants to speak with people affected by the problem and carefully define their target audience. For some, including the Taste Voyager team, which began the hackathon as Immigrant Integration, this resulted in an overhaul of the project. Examining their target audience led the team to switch their focus from helping immigrants integrate to creating a way for people of different backgrounds to connect and help each other in a safe space.

The Rahi team, which was led by Hildreth England, assistant director of the Media Lab’s Open Agriculture Initiative, also focused on the user as it attempted to improve the national Women, Infants, and Children (WIC) nutrition program by acknowledging the racial and ethnic inequalities embedded in the food system. For example, according to Feeding America, one in five African-American and Latino households is food insecure — lacking consistent and adequate access to affordable and nutritious food — compared to one in 10 Caucasian households.

During the first day of the event, speeches by Kirk Kolenbrander, vice president at MIT; J. Phillip Thompson, associate professor of urban studies and planning; and Shannon Al-Wakeel, executive director of the Muslim Justice League, reminded participants of the past and current social justice issues needing solutions. The following morning, in a keynote address, Pinkett stressed the strengths and weaknesses that come with cultural differences. “Our greatest strength is our diversity; our greatest liability is in our cultural ignorance,” he said.

A Hacking Discrimination Fund, which was announced at the event, has been created to support undergraduate and graduate students addressing racism and discrimination through events such as the hackathon, development of sustainable community dialogue, contest development, and other activities that specifically address racism in the U.S. The fund’s emphasis will be placed on solutions that aim to overcome challenges to safety or economic and professional success for populations that have historically been victims of racism. Alumnae organizers Egbuonu-Davis and Harris worked closely with a number of collaborators to launch the inaugural event. Contributors included Punjwani; Leo Anthony G. Celi SM ’09, a principal research scientist at the MIT Institute of Medical Engineering and Science; Trishan Panch, an MIT lecturer, primary care physician, and co-founder and Chief Medical Officer at Wellframe; and Marzyeh Ghassemi and Tristan Naumann, both MIT CSAIL PhD candidates.

Online Conversations

Conversation is interactive communication between two or more people. The development of conversational skills and etiquette is an important part of socialization. The development of conversational skills in a new language is a frequent focus of language teaching and learning. Conversation analysis is a branch of sociology which studies the structure and organization of human interaction, with a more specific focus on conversational interaction.

Online chat may refer to any kind of communication over the Internet that offers a real-time transmission of text messages from sender to receiver. Chat messages are generally short in order to enable other participants to respond quickly. Thereby, a feeling similar to a spoken conversation is created, which distinguishes chatting from other text-based online communication forms such as Internet forums and email. Online chat may address point-to-point communications as well as multicast communications from one sender to many receivers and voice and video chat, or may be a feature of a web conferencing service.

Online chat in a less stringent definition may be primarily any direct text-based or video-based (webcams) one-on-one chat or one-to-many group chat (formally also known as synchronous conferencing), using tools such as instant messengers, Internet Relay Chat (IRC), talkers and possibly MUDs. The expression online chat comes from the word chat, which means “informal conversation”. Online chat includes web-based applications that allow communication – often directly addressed, but anonymous – between users in a multi-user environment. Web conferencing is a more specific online service that is often sold as a service hosted on a web server controlled by the vendor.

No generally accepted definition of conversation exists, beyond the fact that a conversation involves at least two people talking together. Consequently, the term is often defined by what it is not. A ritualized exchange such as a mutual greeting is not a conversation, and an interaction that includes a marked status differential (such as a boss giving orders) is also not a conversation. An interaction with a tightly focused topic or purpose is also generally not considered a conversation. Summarizing these properties, one authority writes that “Conversation is the kind of speech that happens informally, symmetrically, and for the purposes of establishing and maintaining social ties.”

From a less technical perspective, a writer on etiquette in the early 20th century defined conversation as the polite give and take of subjects thought of by people talking with each other for company.

Conversations follow rules of etiquette because conversations are social interactions, and therefore depend on social convention. Specific rules for conversation arise from the cooperative principle. Failure to adhere to these rules causes the conversation to deteriorate or eventually to end. Contributions to a conversation are responses to what has previously been said.

Conversations may be the optimal form of communication, depending on the participants’ intended ends. Conversations may be ideal when, for example, each party desires a relatively equal exchange of information, or when the parties desire to build social ties. On the other hand, if permanency or the ability to review such information is important, written communication may be ideal. Or if time-efficient communication is most important, a speech may be preferable.

Conversation involves nuance and implied context that lie beneath the words themselves.

How to Start a Conversation Online

1. Stop thinking so much about it. If you’re trying to get to know someone (and, perhaps, to woo them), the goal of these first few online conversations is to help them understand who you are as a person. You want to be yourself, and a script will only get you so far.
  • Striking up a conversation online is hard for almost everyone. You’re not the first, and you won’t be the last.
  • Worst case, it’ll be a learning experience. Best case, you’ll connect with somebody in a deep way. Neither case applies until you try.

2. Pick a convenient time. Try to message the person when they’re online. It may be easier to get a conversation going in real-time than to count on someone to respond later on.

  • Pick a time when you don’t have anywhere to be. You don’t want to be stressed-out, and you want to give the conversation a chance to grow.

3. Start small. Send the person a short message and ask them how they’re doing. A “Hey. How’s it going?” will do. You may find that you feel much looser once you get the conversation going–there’s no turning back now!

  • They will likely respond with how they’re doing, then ask you how you’re doing. Be prepared to say how you’re doing.
  • Avoid dead-end answers like “I’m good.” Anyone can be “good”. Respond with something that tells your conversation partner about who you are, such as “I’m good! My friend and I explored this abandoned house up in the hills today. It was really cool but super spooky” or “My dance team just made it to nationals. I’m so excited!”
  • Mention things that make you seem interesting, but avoid bragging.

4. Ask about a common interest. This is a classic, tried-and-true conversation opener. If you’re in a class together, ask what the homework assignment is. If you’re in a club together, ask about an upcoming club event. This can break the ice in a very natural way, opening the gates to a deeper talk.

  • Try something like this: “Hey- I completely blanked and forgot to write down the homework for English today. Did you happen to get it?”
  • Or this: “Hey, do you know when our next track meet is? I must have tuned out when coach announced it during practice today…”

5. Compliment the person. If a person does something worthy of praise, it’s natural to compliment them. This can be another great way to break the ice and make the person feel appreciated. Don’t overdo it–be sparing with your compliments, or they may come across as flattery.

  • If you’re in a class together: “You did a great job on your presentation today! I never thought I’d learn so much about Ulysses S. Grant!”
  • If you’re on a team together: “Nice work in the 100-yard sprint at the meet today. You really put the team on your back.”

6. Ask a question. If you’ve met someone on a dating site like OKCupid or a dating app like Tinder, then you probably don’t have any real-life connections to talk about. Ask the person an open-ended question about themselves. Take your inspiration from their profile.

  • For example: “I see you’re into hip hop. Been to any good shows lately?”
  • Or: “I dig your beard. How long have you been growing that sucker?”

7. Be careful with stock pickup lines. Pickup lines can backfire: they work on some people, but they turn off others. These lines can come across as cheesy or manipulative, especially if they aren’t something that you thought of yourself. Try to come across as genuine, and if that includes a pickup line, then you do you!

New 3D chip

As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.

Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature, by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford. The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.

Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, making the most complex nanoelectronic system ever made with emerging nanotechnologies. The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3D computer architecture with interleaving layers of logic and memory. By inserting ultradense wires between these layers, this 3D architecture promises to address the communication bottleneck.

To demonstrate the potential of the technology, the researchers took advantage of the ability of carbon nanotubes to also act as sensors. On the top layer of the chip they placed over 1 million carbon nanotube-based sensors, which they used to detect and classify ambient gases. Due to the layering of sensing, data storage, and computing, the chip was able to measure each of the sensors in parallel, and then write directly into its memory, generating huge bandwidth, Shulaker says. Three-dimensional integration is the most promising approach to continue the technology scaling path set forth by Moore’s law, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.

The team is working to improve the underlying nanotechnologies, while exploring the new 3D computer architecture. For Shulaker, the next step is working with Massachusetts-based semiconductor company Analog Devices to develop new versions of the system that take advantage of its ability to carry out sensing and data processing on the same chip. So, for example, the devices could be used to detect signs of disease by sensing particular compounds in a patient’s breath, says Shulaker. This work was funded by the Defense Advanced Research Projects Agency, the National Science Foundation, Semiconductor Research Corporation, STARnet SONIC, and member companies of the Stanford SystemX Alliance.

Optical Quantum Computing

An efficient way to make photons interact could open new prospects for both classical optics and quantum computing, an experimental technology that promises large speedups on some types of calculations. In recent years, physicists have enabled photon-photon interactions using atoms of rare elements cooled to very low temperatures. But in the latest issue of Physical Review Letters, MIT researchers describe a new technique for enabling photon-photon interactions at room temperature, using a silicon crystal with distinctive patterns etched into it. In physics jargon, the crystal introduces “nonlinearities” into the transmission of an optical signal.

Photonic independence

Quantum computers harness a strange physical property called “superposition,” in which a quantum particle can be said to inhabit two contradictory states at the same time. The spin, or magnetic orientation, of an electron, for instance, could be both up and down at the same time; the polarization of a photon could be both vertical and horizontal. If a string of quantum bits — or qubits, the quantum analog of the bits in a classical computer — is in superposition, it can, in some sense, canvass multiple solutions to the same problem simultaneously, which is why quantum computers promise speedups.
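
Superposition is easy to see in the standard textbook state-vector formalism. The hypothetical Python/NumPy sketch below (not tied to the experiments described here) puts three qubits into uniform superposition: the resulting vector assigns equal amplitude to all eight bit strings at once.

    import numpy as np

    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard gate
    zero = np.array([1.0, 0.0])                           # the |0> state

    n = 3
    state = zero
    Hn = H
    for _ in range(n - 1):
        state = np.kron(state, zero)  # build |000>, an 8-dimensional vector
        Hn = np.kron(Hn, H)           # Hadamard applied to each of the n qubits

    state = Hn @ state
    print(state)  # all 8 amplitudes equal 1/sqrt(8), about 0.354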

Most experimental qubits use ions trapped in oscillating magnetic fields, superconducting circuits, or — like Englund’s own research — defects in the crystal structure of diamonds. With all these technologies, however, superpositions are difficult to maintain. In a photonic system where photons interact, by contrast, the quantum state of one photon can be thought of as controlling the quantum state of the other. And quantum information theory has established that simple quantum “gates” of this type are all that is necessary to build a universal quantum computer.

Unsympathetic resonance

The researchers’ device consists of a long, narrow, rectangular silicon crystal with regularly spaced holes etched into it. The holes are widest at the ends of the rectangle, and they narrow toward its center. Connecting the two middle holes is an even narrower channel, and at its center, on opposite sides, are two sharp concentric tips. The pattern of holes temporarily traps light in the device, and the concentric tips concentrate the electric field of the trapped light.

Photons entering the device shift its resonance frequency slightly. Ordinarily, that shift is mild enough to be negligible. But because the sharp tips in the researchers’ device concentrate the electric fields of entering photons, they also exaggerate the shift. A single photon could still get through the device. But if two photons attempted to enter it, the shift would be so dramatic that they’d be repelled.

Practical potential

The device can be configured so that the dramatic shift in resonance frequency occurs only if the photons attempting to enter it have particular quantum properties — specific combinations of polarization or phase, for instance. The quantum state of one photon could thus determine the way in which the other photon is handled, the basic requirement for a quantum gate.

Englund emphasizes that the new research will not yield a working quantum computer in the immediate future. Too much of the light entering the prototype is still either scattered or absorbed, and the quantum states of the photons can become slightly distorted. But other applications may be more feasible in the near term. For instance, a version of the device could provide a reliable source of single photons, which would greatly abet a range of research in quantum information science and communications.

A scheme for efficient quantum computation with linear optics

Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.