
Digital data, in information theory and information systems, is information represented as a string of discrete symbols, each of which can take on one of only a finite number of values from some alphabet, such as letters or digits. An example is a text document, which consists of a string of alphanumeric characters. The most common form of digital data in modern information systems is binary data, which is represented by a string of binary digits (bits) each of which can have one of two values, either 0 or 1.
Digital data can be contrasted with analog data, which is represented by a value from a continuous range of real numbers. Analog data is transmitted by an analog signal, which not only takes on continuous values but can also vary continuously with time; it can be modeled as a continuous real-valued function of time. An example is the air pressure variation in a sound wave.
Data requires interpretation to become information. In modern (post-1960) computer systems, all data is digital.
The word digital comes from the same source as the words digit and digitus (the Latin word for finger), as fingers are often used for counting. In 1942, mathematician George Stibitz of Bell Telephone Laboratories used the word digital in reference to the fast electric pulses emitted by a device designed to aim and fire anti-aircraft guns.[1] The term is most commonly used in computing and electronics, especially where real-world information is converted to binary numeric form, as in digital audio and digital photography.
Symbol to digital conversion
Since symbols (for example, alphanumeric characters) are not continuous, representing symbols digitally is rather simpler than conversion of continuous or analog information to digital. Instead of sampling and quantization as in analog-to-digital conversion, such techniques as polling and encoding are used.
A symbol input device usually consists of a group of switches that are polled at regular intervals to see which switches are closed (pressed). Data will be lost if, within a single polling interval, two switches are pressed, or a switch is pressed, released, and pressed again. This polling can be done by a specialized processor in the device to prevent burdening the main CPU.[2] When a new symbol has been entered, the device typically sends an interrupt, in a specialized format, so that the CPU can read it.
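A minimal sketch of such a polling loop in Python; read_switches() and handle_press() are hypothetical stand-ins for the device's hardware port read and for the event or interrupt delivered to the CPU:

```python
import time

# Hypothetical hardware read: returns the set of switch IDs currently closed.
def read_switches():
    return set()

POLL_INTERVAL = 0.01  # seconds between polls

def poll_loop(handle_press):
    previous = set()
    while True:
        current = read_switches()
        # A switch pressed and released entirely between two polls is missed,
        # which is the data-loss case described above.
        for switch_id in current - previous:
            handle_press(switch_id)  # e.g. signal the CPU that a new symbol arrived
        previous = current
        time.sleep(POLL_INTERVAL)
```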
For devices with only a few switches (such as the buttons on a joystick), the status of each can be encoded as bits (usually 0 for released and 1 for pressed) in a single word. This is useful when combinations of key presses are meaningful, and is sometimes used for passing the status of modifier keys on a keyboard (such as shift and control). But it does not scale to support more keys than the number of bits in a single byte or word.
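A short sketch of such a status word for a hypothetical four-button device (the button names and bit positions are illustrative only):

```python
# Bit positions for a hypothetical four-button device.
BTN_UP, BTN_DOWN, BTN_FIRE, BTN_SHIFT = 0, 1, 2, 3

def pack_status(pressed):
    """Encode the set of pressed buttons as bits of a single status word."""
    word = 0
    for bit in pressed:
        word |= 1 << bit          # 1 = pressed, 0 = released
    return word

def is_pressed(word, bit):
    return bool(word & (1 << bit))

status = pack_status({BTN_UP, BTN_FIRE})      # 0b0101
assert is_pressed(status, BTN_FIRE) and not is_pressed(status, BTN_DOWN)
```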
Devices with many switches (such as a computer keyboard) usually arrange these switches in a scan matrix, with the individual switches on the intersections of x and y lines. When a switch is pressed, it connects the corresponding x and y lines together. Polling (often called scanning in this case) is done by activating each x line in sequence and detecting which y lines then have a signal, thus which keys are pressed. When the keyboard processor detects that a key has changed state, it sends a signal to the CPU indicating the scan code of the key and its new state. The symbol is then encoded or converted into a number based on the status of modifier keys and the desired character encoding.
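A simplified sketch of matrix scanning for an assumed 4×4 matrix; read_y_lines() is a hypothetical stand-in for electrically sensing the y lines while one x line is driven:

```python
# Hypothetical 4x4 key matrix: KEYMAP[x][y] is the scan code at that crossing.
KEYMAP = [[16 * x + y for y in range(4)] for x in range(4)]

# Stand-in for hardware access: with x line `x` driven, return the list of
# y lines that carry a signal (i.e. whose switches on that x line are closed).
def read_y_lines(x):
    return []

def scan_matrix():
    """One full scan: activate each x line in turn and collect pressed scan codes."""
    pressed = []
    for x in range(4):
        for y in read_y_lines(x):
            pressed.append(KEYMAP[x][y])
    return pressed
```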
A custom encoding can be used for a specific application with no loss of data. However, using a standard encoding such as ASCII is problematic if a symbol such as 'ß' needs to be converted but is not in the standard.
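The problem can be seen directly in Python, where 'ß' cannot be represented in ASCII but survives in a larger standard encoding such as UTF-8:

```python
text = "Straße"
try:
    text.encode("ascii")              # 'ß' has no ASCII code point
except UnicodeEncodeError as err:
    print("symbol cannot be encoded:", err)

print(text.encode("utf-8"))           # b'Stra\xc3\x9fe' (no loss of data)
```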
It is estimated that in the year 1986, less than 1% of the world's technological capacity to store information was digital and in 2007 it was already 94%.[3] The year 2002 is assumed to be the year when humankind was able to store more information in digital than in analog format (the "beginning of the digital age").[4][5]
States
Digital data come in three states: data at rest, data in transit, and data in use.[6][7] The confidentiality, integrity, and availability of data have to be managed during its entire lifecycle, from 'birth' to destruction.[8]
Data at rest
Data at rest in information technology means data that is housed physically on computer data storage in any digital form (e.g. cloud storage, file hosting services, databases, data warehouses, spreadsheets, archives, tapes, off-site or cloud backups, mobile devices, etc.). Data at rest includes both structured and unstructured data.[9] This type of data is subject to threats from hackers and other malicious actors seeking to access it digitally, as well as to physical theft of the data storage media. To prevent this data from being accessed, modified or stolen, organizations will often employ security protection measures such as password protection, data encryption, or a combination of both. The security options used for this type of data are broadly referred to as data-at-rest protection (DARP).[10]
Definitions include:
"...all data in computer storage while excluding data that is traversing a network or temporarily residing in computer memory to be read or updated."[11]
"...all data in storage but excludes any data that frequently traverses the network or that which resides in temporary memory. Data at rest includes but is not limited to archived data, data which is not accessed or changed frequently, files stored on hard drives, USB thumb drives, files stored on backup tape and disks, and also files stored off-site or on a storage area network (SAN)."[12]
It is generally accepted that archive data (i.e. data which never changes), regardless of its storage medium, is data at rest, while active data subject to constant or frequent change is data in use. “Inactive data” could be taken to mean data which may change, but infrequently. The imprecise nature of terms such as “constant” and “frequent” means that some stored data cannot be comprehensively defined as either data at rest or data in use. These definitions could be taken to assume that data at rest is a superset of data in use; however, data in use, being subject to frequent change, has distinct processing requirements from data at rest, whether the latter is completely static or subject to occasional change.
Security
Because of its nature, data at rest is of increasing concern to businesses, government agencies and other institutions.[11] Mobile devices are often subject to specific security protocols to protect data at rest from unauthorized access when lost or stolen,[13] and there is an increasing recognition that database management systems and file servers should also be considered as at risk;[14] the longer data is left unused in storage, the more likely it is to be retrieved by unauthorized individuals outside the network.
Data encryption, which prevents data visibility in the event of its unauthorized access or theft, is commonly used to protect data in motion and is increasingly promoted for protecting data at rest.[15] The encryption of data at rest should use only strong encryption methods such as AES or RSA. Encrypted data should remain encrypted when access controls such as usernames and passwords fail. Applying encryption at multiple levels is recommended: cryptography can be implemented both on the database housing the data and on the physical storage where the databases are stored. Data encryption keys should be updated on a regular basis and stored separately from the data. Encryption also enables crypto-shredding at the end of the data or hardware lifecycle. Periodic auditing of sensitive data should be part of policy and should occur at scheduled intervals. Finally, only the minimum possible amount of sensitive data should be stored.[16]
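As an illustration only (not a complete data-at-rest protection scheme), the following sketch uses the Fernet recipe from the third-party Python cryptography package, which is built on AES, and keeps the key in a location separate from the encrypted file; the file names are hypothetical:

```python
from pathlib import Path
from cryptography.fernet import Fernet   # pip install cryptography

# Generate the key once and store it separately from the data
# (in production this would be a key management service or HSM).
key = Fernet.generate_key()
Path("records.key").write_bytes(key)

cipher = Fernet(key)
Path("records.db.enc").write_bytes(cipher.encrypt(b"sensitive data at rest"))

# Later: load the key from its separate location and decrypt.
cipher = Fernet(Path("records.key").read_bytes())
plaintext = cipher.decrypt(Path("records.db.enc").read_bytes())
assert plaintext == b"sensitive data at rest"
```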
Tokenization is a non-mathematical approach to protecting data at rest that replaces sensitive data with non-sensitive substitutes, referred to as tokens, which have no extrinsic or exploitable meaning or value. This process does not alter the type or length of the data, which means it can be processed by legacy systems such as databases that may be sensitive to data length and type. Tokens require significantly less computational resources to process and less storage space in databases than traditionally encrypted data, because specific data is kept fully or partially visible for processing and analytics while sensitive information is kept hidden. The lower processing and storage requirements make tokenization an ideal method of securing data at rest in systems that manage large volumes of data.
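A toy sketch of a token vault; in practice the mapping lives in a hardened, access-controlled store, and the card number and helper names here are purely illustrative:

```python
import secrets

_vault = {}   # token -> original value; held in a protected store in practice

def tokenize(card_number: str) -> str:
    """Replace a sensitive value with a random token of the same length and format."""
    token = "".join(secrets.choice("0123456789") for _ in card_number)
    _vault[token] = card_number
    return token

def detokenize(token: str) -> str:
    """Only authorized services should be able to reverse the mapping."""
    return _vault[token]

tok = tokenize("4111111111111111")
assert len(tok) == 16 and detokenize(tok) == "4111111111111111"
```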
A further method of preventing unwanted access to data at rest is the use of data federation[17] especially when data is distributed globally (e.g. in off-shore archives). An example of this would be a European organisation which stores its archived data off-site in the US. Under the terms of the USA PATRIOT Act[18] the American authorities can demand access to all data physically stored within its boundaries, even if it includes personal information on European citizens with no connections to the US. Data encryption alone cannot be used to prevent this as the authorities have the right to demand decrypted information. A data federation policy which retains personal citizen information with no foreign connections within its country of origin (separate from information which is either not personal or is relevant to off-shore authorities) is one option to address this concern. However, data stored in foreign countries can be accessed using legislation in the CLOUD Act.
Data in use
Data in use is an information technology term referring to active data which is stored in a non-persistent digital state or volatile memory, typically in computer random-access memory (RAM), CPU caches, or CPU registers.[19]
Data in use has also been taken to mean “active data” in the context of being in a database or being manipulated by an application. For example, some enterprise encryption gateway solutions for the cloud claim to encrypt data at rest, data in transit and data in use.[20]
Some cloud software as a service (SaaS) providers refer to data in use as any data currently being processed by applications, as the CPU and memory are utilized.[21]
Security
Because of its nature, data in use is of increasing concern to businesses, government agencies and other institutions. Data in use, or memory, can contain sensitive data including digital certificates, encryption keys, intellectual property (software algorithms, design data), and personally identifiable information. Compromising data in use enables access to encrypted data at rest and data in motion. For example, someone with access to random access memory can parse that memory to locate the encryption key for data at rest. Once they have obtained that encryption key, they can decrypt encrypted data at rest. Threats to data in use can come in the form of cold boot attacks, malicious hardware devices, rootkits and bootkits.
Encryption, which prevents data visibility in the event of its unauthorized access or theft, is commonly used to protect data in motion and data at rest, and is increasingly recognized as an optimal method for protecting data in use. There have been multiple projects to encrypt memory. Microsoft Xbox systems are designed to provide memory encryption, and the company PrivateCore has a commercial software product, vCage, that provides attestation along with full memory encryption for x86 servers.[22] Several papers have been published highlighting the availability of security-enhanced x86 and ARM commodity processors.[19][23] In that work, an ARM Cortex-A8 processor is used as the substrate on which a full memory encryption solution is built. Process segments (for example, stack, code or heap) can be encrypted individually or in composition. This work marks the first full memory encryption implementation on a mobile general-purpose commodity processor. The system provides both confidentiality and integrity protections of code and data, which are encrypted everywhere outside the CPU boundary.
For x86 systems, AMD has a Secure Memory Encryption (SME) feature introduced in 2017 with Epyc.[24] Intel has promised to deliver its Total Memory Encryption (TME) feature in an upcoming CPU.[25][26]
Operating system kernel patches such as TRESOR and Loop-Amnesia modify the operating system so that CPU registers can be used to store encryption keys and avoid holding encryption keys in RAM. While this approach is not general purpose and does not protect all data in use, it does protect against cold boot attacks. Encryption keys are held inside the CPU rather than in RAM so that data at rest encryption keys are protected against attacks that might compromise encryption keys in memory.
Intel Corporation has introduced the concept of “enclaves” as part of its Software Guard Extensions (SGX): an enclave is a region of RAM whose contents are kept encrypted in memory but are available as clear text inside the CPU and CPU cache. Intel revealed the architecture, combining software and CPU hardware, in technical papers published in 2013.[27]
Several cryptographic tools, including secure multi-party computation and homomorphic encryption, allow for the private computation of data on untrusted systems. Data in use could be operated upon while encrypted and never exposed to the system doing the processing.
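As a toy illustration of the idea (not a production scheme), the following Python sketch implements the Paillier cryptosystem with deliberately tiny, insecure parameters; its additive homomorphism lets an untrusted party add two values without ever seeing them:

```python
import math, random

def lcm(a, b):
    return a * b // math.gcd(a, b)

p, q = 293, 433                               # tiny demonstration primes (insecure)
n, n2, g = p * q, (p * q) ** 2, p * q + 1
lam = lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # modular inverse; requires Python 3.8+

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c1, c2 = encrypt(17), encrypt(25)
# Multiplying ciphertexts adds the hidden plaintexts: the untrusted system
# computes on encrypted data without learning 17, 25 or their sum.
assert decrypt((c1 * c2) % n2) == 42
```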
Data in transit
Data in transit, also referred to as data in motion[28] and data in flight,[29] is data en route between source and destination, typically on a computer network.
Data in transit can be separated into two categories: information that flows over a public or untrusted network, such as the Internet, and data that flows within the confines of a private network, such as a corporate or enterprise local area network (LAN).[30]

In computing
Data within a computer, in most cases, moves as parallel data. Data moving to or from a computer, in most cases, moves as serial data. Data sourced from an analog device, such as a temperature sensor, may be converted to digital using an analog-to-digital converter. Data representing quantities, characters, or symbols on which operations are performed by a computer are stored and recorded on magnetic, optical, electronic, or mechanical recording media, and transmitted in the form of digital electrical or optical signals.[31] Data pass in and out of computers via peripheral devices.
Physical computer memory elements consist of an address and a byte/word of data storage. Digital data are often stored in relational databases, like tables or SQL databases, and can generally be represented as abstract key/value pairs. Data can be organized in many different types of data structures, including arrays, graphs, and objects. Data structures can store data of many different types, including numbers, strings and even other data structures.
Characteristics
Metadata helps translate data to information. Metadata is data about the data. Metadata may be implied, specified or given.
Data relating to physical events or processes will have a temporal component. This temporal component may be implied. This is the case when a device such as a temperature logger receives data from a temperature sensor. When the temperature is received, it is assumed that the data has a temporal reference of "now", so the device records the date, time and temperature together. When the data logger communicates temperatures, it must also report the date and time as metadata for each temperature reading.
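A minimal sketch of such a logger in Python; read_temperature() is a hypothetical sensor read:

```python
from datetime import datetime, timezone

def read_temperature():
    """Hypothetical sensor read, returning degrees Celsius."""
    return 21.7

log = []

def record_sample():
    # The reading is assumed to mean "now", so the logger attaches the date
    # and time as metadata at the moment of capture.
    log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "temperature_c": read_temperature(),
    })

record_sample()
print(log[-1])   # e.g. {'timestamp': '2024-...T12:00:00+00:00', 'temperature_c': 21.7}
```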
Fundamentally, computers follow a sequence of instructions they are given in the form of data. A set of instructions to perform a given task (or tasks) is called a program. A program is data in the form of coded instructions to control the operation of a computer or other machine.[32] In the nominal case, the program, as executed by the computer, will consist of machine code. The elements of storage manipulated by the program, but not actually executed by the central processing unit (CPU), are also data. At its most essential, a single datum is a value stored at a specific location. Therefore, it is possible for computer programs to operate on other computer programs, by manipulating their programmatic data.
To store data bytes in a file, they have to be serialized in a file format. Typically, programs are stored in special file types, different from those used for other data. Executable files contain programs; all other files are data files. However, executable files may also contain data that is built into the program. In particular, some executable files have a data segment, which nominally contains constants and initial values for variables, both of which can be considered data.
The line between program and data can become blurry. An interpreter, for example, is a program. The input data to an interpreter is itself a program, just not one expressed in native machine language. In many cases, the interpreted program will be a human-readable text file, which is manipulated with a text editor program. Metaprogramming similarly involves programs manipulating other programs as data. Programs like compilers, linkers, debuggers, program updaters, virus scanners and such use other programs as their data.
For example, a user might first instruct the operating system to load a word processor program from one file, and then use the running program to open and edit a document stored in another file. In this example, the document would be considered data. If the word processor also features a spell checker, then the dictionary (word list) for the spell checker would also be considered data. The algorithms used by the spell checker to suggest corrections would be either machine code data or text in some interpretable programming language.
In an alternate usage, binary files (which are not human-readable) are sometimes called data as distinguished from human-readable text.[33]
The total amount of digital data in 2007 was estimated to be 281 billion gigabytes (281 exabytes).[34][35]
Data keys and values, structures and persistence
Keys in data provide the context for values. Regardless of the structure of data, there is always a key component present. Keys in data and data-structures are essential for giving meaning to data values. Without a key that is directly or indirectly associated with a value, or collection of values in a structure, the values become meaningless and cease to be data. That is to say, there has to be a key component linked to a value component in order for it to be considered data.[citation needed]
Data can be represented in computers in multiple ways, as per the following examples:
RAM
Random access memory (RAM) holds data that the CPU has direct access to. A CPU may only manipulate data within its processor registers or memory; this is in contrast to data storage, where the CPU must direct the transfer of data between the storage device (disk, tape...) and memory. RAM is an array of linear contiguous locations that a processor may read or write by providing an address for the read or write operation. The processor may operate on any location in memory at any time and in any order. In RAM the smallest element of data is the binary bit. The capabilities and limitations of accessing RAM are processor specific. In general, main memory is arranged as an array of locations beginning at address 0 (hexadecimal 0). Each location usually stores 8 or 32 bits, depending on the computer architecture.
Keys
Data keys need not be a direct hardware address in memory. Indirect, abstract and logical key codes can be stored in association with values to form a data structure. Data structures have predetermined offsets (or links or paths) from the start of the structure, in which data values are stored. Therefore, the data key consists of the key to the structure plus the offset (or links or paths) into the structure. When such a structure is repeated, storing variations of the data values and the data keys within the same repeating structure, the result can be considered to resemble a table, in which each element of the repeating structure is considered to be a column and each repetition of the structure is considered to be a row of the table. In such an organization of data, the data key is usually a value in one of the columns (or a composite of the values in several).
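A small Python sketch of such a repeating structure viewed as a table; the field names and values are illustrative:

```python
from collections import namedtuple

# Each repetition of the structure is a row; each field offset is a column.
Reading = namedtuple("Reading", ["station_id", "timestamp", "temperature_c"])

table = [
    Reading("S1", "2024-01-01T00:00Z", 3.2),
    Reading("S2", "2024-01-01T00:00Z", 4.9),
]

# The data key is one column (or a composite of several columns).
by_key = {(row.station_id, row.timestamp): row for row in table}
print(by_key[("S2", "2024-01-01T00:00Z")].temperature_c)   # 4.9
```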
Organised recurring data structures
The tabular view of repeating data structures is only one of many possibilities. Repeating data structures can also be organised hierarchically, such that nodes are linked to each other in a cascade of parent-child relationships. Values and potentially more complex data structures are linked to the nodes, so the nodal hierarchy provides the key for addressing the data structures associated with the nodes. This representation can be thought of as an inverted tree. Modern computer operating system file systems are a common example; XML is another.
Sorted or ordered data
Data has some inherent features when it is sorted on a key: all the values for each subset of the key appear together. When passing sequentially through groups of the data, the point at which the key (or a subset of the key) changes is referred to in data processing circles as a break, or a control break. Sorting particularly facilitates the aggregation of data values over subsets of a key.
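A short Python sketch of control-break processing on data already sorted by key, using itertools.groupby to aggregate within each group; the rows are illustrative:

```python
from itertools import groupby
from operator import itemgetter

# Rows already sorted on the key (region); each change of key is a control break.
rows = [
    ("north", 120), ("north", 80),
    ("south", 200), ("south", 50), ("south", 25),
]

for region, group in groupby(rows, key=itemgetter(0)):
    subtotal = sum(amount for _, amount in group)
    print(region, subtotal)     # north 200, then south 275
```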
Peripheral storage
Until the advent of bulk non-volatile memory like flash, persistent data storage was traditionally achieved by writing the data to external block devices like magnetic tape and disk drives. These devices typically seek to a location on the magnetic media and then read or write blocks of data of a predetermined size. In this case, the seek location on the media is the data key and the blocks are the data values. Early file systems and disc operating systems that worked with raw disk data reserved contiguous blocks on the disc drive for data files. In those systems, the files could fill up, running out of data space before all the data had been written to them, so much unused data space was reserved unproductively to ensure adequate free space for each file. Later file systems introduced partitions: they reserved blocks of disc data space for partitions and used the allocated blocks more economically, by dynamically assigning blocks of a partition to a file as needed. To achieve this, the file system had to keep track of which blocks were used or unused by data files in a catalog or file allocation table. Though this made better use of the disc data space, it resulted in fragmentation of files across the disc and a concomitant performance overhead due to the additional seek time needed to read the data. Modern file systems reorganize fragmented files dynamically to optimize file access times. Further developments in file systems resulted in the virtualization of disc drives, i.e. where a logical drive can be defined as partitions from a number of physical drives.
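A toy sketch of dynamic block allocation with a file allocation table, showing how non-contiguous assignment leads to fragmentation; the block count and file names are arbitrary:

```python
BLOCK_COUNT = 16
free_blocks = set(range(BLOCK_COUNT))   # unused blocks on the "disc"
allocation_table = {}                   # file name -> ordered list of block numbers

def append_block(filename):
    """Assign the next free block to a file as it grows."""
    if not free_blocks:
        raise OSError("disc full")
    block = min(free_blocks)            # blocks need not be contiguous
    free_blocks.remove(block)
    allocation_table.setdefault(filename, []).append(block)
    return block

for _ in range(3):
    append_block("a.txt")               # a.txt gets blocks 0, 1, 2
append_block("b.txt")                   # b.txt gets block 3
append_block("a.txt")                   # a.txt grows again and gets block 4
print(allocation_table)                 # a.txt is now fragmented: [0, 1, 2, 4]
```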
Indexed data
Retrieving a small subset of data from a much larger set may imply inefficiently searching through the data sequentially. Indexes are a way to copy out keys and location addresses from data structures in files, tables and data sets, then organize them using inverted tree structures to reduce the time taken to retrieve a subset of the original data. In order to do this, the key of the subset of data to be retrieved must be known before retrieval begins. The most popular indexes are the B-tree and the dynamic hash key indexing methods. Indexing is overhead for filing and retrieving data. There are other ways of organizing indexes, e.g. sorting the keys and using a binary search algorithm.
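A small sketch of the last approach mentioned (sorted keys plus binary search), using Python's bisect module; the keys and record locations are illustrative:

```python
from bisect import bisect_left

# Index entries: (key, location) pairs kept sorted on the key.
index = sorted([("cherry", 2048), ("apple", 0), ("banana", 1024), ("damson", 4096)])
keys = [k for k, _ in index]

def lookup(key):
    """Binary-search the index instead of scanning the data sequentially."""
    i = bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return index[i][1]      # location (address) of the record
    return None

assert lookup("banana") == 1024
assert lookup("elderberry") is None
```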
Abstraction and indirection
Object-oriented programming uses two basic concepts for understanding data and software:
- The taxonomic rank-structure of classes, which is an example of a hierarchical data structure; and
- at run time, the creation of references to in-memory data-structures of objects that have been instantiated from a class library.
It is only after instantiation that an object of a specified class exists. After an object's reference is cleared, the object also ceases to exist. The memory locations where the object's data was stored are garbage and are reclassified as unused memory available for reuse.
Database data
The advent of databases introduced a further layer of abstraction for persistent data storage. Databases use metadata, and a Structured Query Language (SQL) protocol between client and server systems communicating over a computer network, typically using a two-phase commit logging system to ensure transactional completeness when saving data.
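A minimal single-node sketch using Python's built-in sqlite3 module; it illustrates only the transactional commit-or-rollback behaviour, not a networked client/server setup or a distributed two-phase commit:

```python
import sqlite3

conn = sqlite3.connect(":memory:")      # in-memory database for illustration
conn.execute("CREATE TABLE readings (station TEXT, temperature_c REAL)")

try:
    with conn:                          # opens a transaction; commits on success
        conn.execute("INSERT INTO readings VALUES (?, ?)", ("S1", 3.2))
        conn.execute("INSERT INTO readings VALUES (?, ?)", ("S2", 4.9))
except sqlite3.Error:
    pass                                # the block rolls back automatically on error

print(conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0])   # 2
```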
Parallel distributed data processing
Modern scalable and high-performance data persistence technologies, such as Apache Hadoop, rely on massively parallel distributed data processing across many commodity computers on a high bandwidth network. In such systems, the data is distributed across multiple computers and therefore any particular computer in the system must be represented in the key of the data, either directly, or indirectly. This enables the differentiation between two identical sets of data, each being processed on a different computer at the same time.
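A sketch of how a data key can determine the owning computer, here by hashing the key onto a small hypothetical list of worker nodes:

```python
from hashlib import sha256

NODES = ["worker-0", "worker-1", "worker-2"]   # hypothetical cluster members

def node_for(key: str) -> str:
    """Derive the owning computer from the data key itself."""
    digest = int(sha256(key.encode()).hexdigest(), 16)
    return NODES[digest % len(NODES)]

# Because the node is a function of the key, two otherwise identical records
# being processed on different machines remain distinguishable.
for key in ("user:42", "user:43", "user:44"):
    print(key, "->", node_for(key))
```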
References
- ^ Ceruzzi, Paul E (29 June 2012). Computing: A Concise History. MIT Press. ISBN 978-0-262-51767-6.
- ^ Heinrich, Lutz J.; Heinzl, Armin; Roithmayr, Friedrich (29 August 2014). Wirtschaftsinformatik-Lexikon (in German). Walter de Gruyter GmbH & Co KG. ISBN 978-3-486-81590-0.
- ^ Martin Hilbert; Priscila López (10 February 2011). "The World's Technological Capacity to Store, Communicate, and Compute Information". Science. Vol. 332, no. 6025. pp. 60–65. doi:10.1126/science.1200970. Archived (PDF) from the original on 31 May 2011. Also "Supporting online material for The World's Technological Capacity to Store, Communicate, and Compute Information" (PDF). Science. doi:10.1126/science.1200970. Archived (PDF) from the original on 31 May 2011. Free access to the article at www.martinhilbert.net/WorldInfoCapacity.html
- ^ "Video animation on The World's Technological Capacity to Store, Communicate, and Compute Information from 1986 to 2010". 11 June 2011. Archived from the original on 21 February 2013. Retrieved 6 November 2013 – via YouTube.
- ^ Cite error: the named reference :0 was invoked but never defined.
- ^ "Data Loss Prevention | Norton Internet Security". Nortoninternetsecurity.cc. 12 March 2011. Retrieved 26 December 2012.
- ^ "Data Protection: Data In transit vs. Data At Rest". Digital Guardian. Retrieved 12 April 2023.
- ^ "The three states of information". The University of Edinburgh. Archived from the original on 14 April 2021. Retrieved 21 February 2021.
- ^ Pickell, Devin. "Structured vs Unstructured Data – What's the Difference?". learn.g2.com. Retrieved 17 November 2020.
- ^ "Webopedia:Data at Rest". 8 June 2007.
- ^ a b "What is data at rest? - Definition from WhatIs.com". Searchstorage.techtarget.com. 22 December 2012. Retrieved 26 December 2012.
- ^ "What is data at rest? - A Word Definition From the Webopedia Computer Dictionary". Webopedia.com. 8 June 2007. Retrieved 26 December 2012.
- ^ "06-EC-O-0008: Data-At-Rest (DAR) Protection" (PDF). Department of the Army. Information Assurance Best Business Practice (IA BBP). 12 October 2006. Archived from the original (PDF) on 22 December 2016.
- ^ "IT Research, Magic Quadrants, Hype Cycles". Gartner. Archived from the original on 2 May 2004. Retrieved 26 December 2012.
- ^ Inmon, Bill (August 2005). "Encryption at Rest - Information Management Magazine Article". Information-management.com. Retrieved 26 December 2012.
- ^ "Cryptographic Storage Cheat Sheet". OWASP. Retrieved 26 December 2012.
- ^ "Information service patterns, Part 1: Data federation pattern". Ibm.com. Retrieved 26 December 2012.
- ^ "USA Patriot Act". Fincen.gov. 1 January 2002. Archived from the original on 28 December 2012. Retrieved 26 December 2012.
- ^ a b M. Henson and S. Taylor "Beyond full disk encryption:protection on security-enhanced commodity processors", "Proceedings of the 11th international conference on applied cryptography and network security", 2013
- ^ "CipherCloud Brings Encryption to Microsoft Office 365". 18 July 2012. Retrieved 1 November 2013.
- ^ "CipherCloud encrypts data across multiple cloud apps". Searchstorage.techtarget.com. 6 September 2012. Archived from the original on 29 October 2013. Retrieved 8 November 2013.
- ^ GCN, John Moore, March 12, 2014:"How to lock down data in use -- and in the cloud"
- ^ M. Henson and S. Taylor "Memory encryption: a survey of existing techniques", "ACM Computing Surveys volume 46 issue 4", 2014
- ^ "Secure Memory Encryption (SME) - x86". WikiChip.
- ^ "Total Memory Encryption (TME) - x86". WikiChip.
- ^ Salter, Jim (26 February 2020). "Intel promises Full Memory Encryption in upcoming CPUs". Ars Technica.
- ^ "Intel Software Guard Extensions (SGX) Is Mighty Interesting". Securosis. 15 July 2013. Retrieved 8 November 2013.
- ^ "Data in motion and data in transit both used on cloudsecurityalliance.org" (PDF). Archived from the original (PDF) on 15 April 2016. Retrieved 18 April 2016.
- ^ "Data in Flight | January 2010 | Communications of the ACM". January 2010.
- ^ SANS White Paper on Encryption
- ^ "Data". Lexico. Archived from the original on 23 June 2019. Retrieved 14 January 2022.
- ^ "Computer program". The Oxford pocket dictionary of current english. Archived from the original on 28 November 2011. Retrieved 11 October 2012.
- ^ "file(1)". OpenBSD manual pages. 24 December 2015. Archived from the original on 5 February 2018. Retrieved 4 February 2018.
- ^ Paul, Ryan (12 March 2008). "Study: amount of digital info > global storage capacity". Ars Technica. Archived from the original on 13 March 2008. Retrieved 13 March 2008.
- ^ Gantz, John F.; et al. (2008). "The diverse and exploding digital universe". International Data Corporation via EMC. Archived from the original on 11 March 2008. Retrieved 12 March 2008.
Properties of digital information
All digital information possesses common properties that distinguish it from analog data with respect to communications:
- Synchronization: Since digital information is conveyed by the sequence in which symbols are ordered, all digital schemes have some method for determining the beginning of a sequence. In written or spoken human languages, synchronization is typically provided by pauses (spaces), capitalization, and punctuation. Machine communications typically use special synchronization sequences.
- Language: All digital communications require a formal language, which in this context consists of all the information that the sender and receiver of the digital communication must both possess, in advance, for the communication to be successful. Languages are generally arbitrary and specify the meaning to be assigned to particular symbol sequences, the allowed range of values, methods to be used for synchronization, etc.
- Errors: Disturbances (noise) in analog communications invariably introduce some, generally small deviation or error between the intended and actual communication. Disturbances in digital communication only result in errors when the disturbance is so large as to result in a symbol being misinterpreted as another symbol or disturbing the sequence of symbols. It is generally possible to have near-error-free digital communication. Further, techniques such as check codes may be used to detect errors and correct them through redundancy or re-transmission. Errors in digital communications can take the form of substitution errors, in which a symbol is replaced by another symbol, or insertion/deletion errors, in which an extra incorrect symbol is inserted into or deleted from a digital message. Uncorrected errors in digital communications have an unpredictable and generally large impact on the information content of the communication.
- Copying: Because of the inevitable presence of noise, making many successive copies of an analog communication is infeasible because each generation increases the noise. Because digital communications are generally error-free, copies of copies can be made indefinitely.
- Granularity: The digital representation of a continuously variable analog value typically involves a selection of the number of symbols to be assigned to that value. The number of symbols determines the precision or resolution of the resulting datum. The difference between the actual analog value and the digital representation is known as quantization error. For example, if the actual temperature is 23.234456544453 degrees, but only two digits (23) are assigned to this parameter in a particular digital representation, the quantizing error is 0.234456544453. This property of digital communication is known as granularity; a short numeric sketch follows this list.
- Compressible: According to Miller, "Uncompressed digital data is very large, and in its raw form, it would actually produce a larger signal (therefore be more difficult to transfer) than analog data. However, digital data can be compressed. Compression reduces the amount of bandwidth space needed to send information. Data can be compressed, sent, and then decompressed at the site of consumption. This makes it possible to send much more information and results in, for example, digital television signals offering more room on the airwave spectrum for more television channels."[1]
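The granularity example above, sketched numerically in Python, quantizing the same value first to two digits and then more finely:

```python
actual = 23.234456544453      # the analog value from the granularity example

quantized = int(actual)       # keep only two digits: 23
print(quantized, actual - quantized)      # quantization error ~ 0.2344565...

finer = round(actual, 2)      # assign more symbols: 23.23
print(finer, actual - finer)              # error shrinks to ~ 0.0044565...
```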
Historical digital systems
Even though digital signals are generally associated with the binary electronic digital systems used in modern electronics and computing, digital systems are actually ancient, and need not be binary or electronic.
- DNA genetic code is a naturally occurring form of digital data storage.
- Written text (due to the limited character set and the use of discrete symbols – the alphabet in most cases)
- The abacus was created sometime between 1000 BC and 500 BC and later became a widespread tool for calculation. It can be regarded as a basic digital calculator that uses beads on rows to represent numbers. Beads have meaning only in discrete up and down states, not in analog in-between states.
- A beacon is perhaps the simplest non-electronic digital signal, with just two states (on and off). In particular, smoke signals are one of the oldest examples of a digital signal, where an analog "carrier" (smoke) is modulated with a blanket to generate a digital signal (puffs) that conveys information.
- Morse code uses six digital states—dot, dash, intra-character gap (between each dot or dash), short gap (between each letter), medium gap (between words), and long gap (between sentences)—to send messages via a variety of potential carriers such as electricity or light, for example using an electrical telegraph or a flashing light.
- Braille uses a six-bit code rendered as dot patterns.
- Flag semaphore uses rods or flags held in particular positions to send messages to the receiver watching them some distance away.
- International maritime signal flags have distinctive markings that represent letters of the alphabet to allow ships to send messages to each other.
- More recently invented, a modem modulates an analog "carrier" signal (such as sound) to encode binary electrical digital information, as a series of binary digital sound pulses. A slightly earlier, surprisingly reliable version of the same concept was to bundle a sequence of audio digital "signal" and "no signal" information (i.e. "sound" and "silence") on magnetic cassette tape for use with early home computers.
See also
- Analog-to-digital converter
- Barker code
- Binary number
- Comparison of analog and digital recording
- Computer data storage
- Data remanence
- Digital architecture
- Digital art
- Digital control
- Digital divide
- Digital electronics
- Digital infinity
- Digital native
- Digital physics
- Digital recording
- Digital Revolution
- Digital video
- Digital-to-analog converter
- Internet forum
References
- ^ Miller, Vincent (2011). Understanding digital culture. London: Sage Publications. sec. "Convergence and the contemporary media experience". ISBN 978-1-84787-497-9.
Further reading
- Tocci, R. 2006. Digital Systems: Principles and Applications (10th Edition). Prentice Hall. ISBN 0-13-172579-3