Frequency-division multiplexing (FDM) is a technique that transmits multiple signals simultaneously over a single transmission path, such as a cable or a wireless link. Each signal travels within its own unique frequency range (carrier), which is modulated by the data. Orthogonal FDM (OFDM) distributes the data over a large number of carriers that are spaced apart at precise frequencies. This spacing provides the "orthogonality" of the technique, which prevents the demodulators from seeing frequencies other than their own. OFDM is closely related to Coded OFDM (COFDM), which adds forward error correction on top of OFDM, and to Discrete Multitone (DMT) modulation, the baseband form of OFDM used in DSL.
Orthogonality
In OFDM, the sub-carrier frequencies are chosen so that the sub-carriers are orthogonal to each other, meaning that cross-talk between the sub-channels is eliminated and inter-carrier guard bands are not required. This greatly simplifies the design of both the transmitter and the receiver: unlike conventional FDM, a separate filter for each sub-channel is not needed. The result is high spectral efficiency, resilience to RF interference, and lower multi-path distortion. The price is that accurate frequency and time synchronization between transmitter and receiver is required.
OFDM exhibits lower multi-path distortion (delay spread), since the sub-signals are sent at lower data rates. Because of the lower-rate transmissions, multi-path delays are not nearly as significant as they would be with a single-channel high-rate system. For example, a signal sent at a high rate over a single channel will likely suffer more from delay spread because the transmitted symbols are closer together in time. In fact, the information content of a narrowband signal can be completely lost at the receiver if multi-path distortion causes the frequency response to have a null at the transmission frequency. The use of multi-carrier OFDM significantly reduces this problem.
Simple Implementation
The orthogonality allows for efficient modulator and demodulator implementation using the FFT algorithm on the receiver side and the inverse FFT on the sender side. Although the principles and some of the benefits have been known since the 1960s, OFDM has become popular for wideband communications today thanks to low-cost digital signal processing components that can efficiently calculate the FFT.
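As a rough illustration, here is a minimal sketch (in Python with NumPy, assuming 64 sub-carriers and QPSK, both arbitrary choices) of an OFDM modulator and demodulator built from nothing more than an inverse FFT at the sender and an FFT at the receiver:

```python
import numpy as np

# Minimal OFDM modulator/demodulator sketch (illustrative parameters only).
N = 64                                   # number of sub-carriers (assumed)
bits = np.random.randint(0, 2, 2 * N)    # 2 bits per QPSK symbol

# Map bit pairs to QPSK symbols, one per sub-carrier (frequency domain).
symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

# Transmitter: the inverse FFT turns the frequency-domain symbols into one
# time-domain OFDM symbol.
tx_time = np.fft.ifft(symbols)

# Receiver: the FFT recovers the sub-carrier symbols (ideal, noiseless channel).
rx_symbols = np.fft.fft(tx_time)

# Demap: the signs of the real and imaginary parts give back the bits.
rx_bits = np.empty_like(bits)
rx_bits[0::2] = (rx_symbols.real < 0).astype(int)
rx_bits[1::2] = (rx_symbols.imag < 0).astype(int)
assert np.array_equal(bits, rx_bits)
```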
Elimination of intersymbol interference
One key principle of OFDM is that low symbol-rate modulation schemes (i.e. schemes in which the symbols are relatively long compared to the channel time characteristics) suffer less from inter-symbol interference caused by multi-path propagation, so it is advantageous to transmit a number of low-rate streams in parallel instead of a single high-rate stream. Since the duration of each symbol is long, it is feasible to insert a guard interval between the OFDM symbols, thus eliminating the inter-symbol interference. The cyclic prefix, which is transmitted during the guard interval, consists of the end of the OFDM symbol copied into the guard interval; the guard interval is transmitted first, followed by the OFDM symbol itself. The guard interval is a copy of the end of the OFDM symbol so that the receiver integrates over an integer number of sinusoid cycles for each of the multi-path components when it performs OFDM demodulation with the FFT.
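The sketch below illustrates why the cyclic prefix matters: with a guard interval at least as long as the channel's delay spread, the channel's linear convolution looks circular to the receiver, so the FFT output is simply each sub-carrier symbol scaled by the channel's frequency response. The three-tap channel and the sizes used here are made-up illustrative values.

```python
import numpy as np

# Sketch: the cyclic prefix (CP) turns the channel's linear convolution into a
# circular one, so the FFT sees each sub-carrier scaled by one complex gain.
N, CP = 64, 16                                  # illustrative sizes (assumed)
h = np.array([1.0, 0.5, 0.2])                   # toy multi-path channel, 3 taps

symbols = np.exp(1j * 2 * np.pi * np.random.rand(N))   # unit-power sub-carrier symbols
tx = np.fft.ifft(symbols)
tx_cp = np.concatenate([tx[-CP:], tx])          # prepend a copy of the symbol's tail

rx = np.convolve(tx_cp, h)[:CP + N]             # channel acts as linear convolution
rx_no_cp = rx[CP:CP + N]                        # receiver discards the guard interval

# After the FFT, each sub-carrier equals symbols[k] * H[k]: no inter-symbol
# or inter-carrier interference remains.
H = np.fft.fft(h, N)
np.testing.assert_allclose(np.fft.fft(rx_no_cp), symbols * H, atol=1e-9)
```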
Simplified equalization
The effects of frequency-selective channel conditions, for example fading caused by multipath propagation, can be considered constant (flat) over an OFDM sub-channel if the sub-channel is sufficiently narrow-band, i.e. if the number of sub-channels is sufficiently large. This makes equalization far simpler at the receiver in OFDM than in conventional single-carrier modulation: the equalizer only has to multiply each detected sub-carrier (each Fourier coefficient) by a single complex number, a value that needs to change only rarely.
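Continuing the toy example above, a one-tap equalizer can be sketched as a single complex division per sub-carrier. The channel estimate is assumed perfect here; a real receiver would derive it from pilot sub-carriers.

```python
import numpy as np

# Sketch of one-tap equalization per sub-carrier (assumes the receiver already
# has a channel estimate H_hat, e.g. obtained from pilot sub-carriers).
N = 64
h = np.array([1.0, 0.5, 0.2])          # same toy multi-path channel as above
H = np.fft.fft(h, N)                   # channel frequency response

tx_symbols = np.sign(np.random.randn(N)) + 1j * np.sign(np.random.randn(N))
rx_freq = tx_symbols * H               # FFT output after the cyclic prefix is removed
rx_freq += 0.01 * (np.random.randn(N) + 1j * np.random.randn(N))  # a little noise

H_hat = H                              # perfect channel estimate, for illustration
equalized = rx_freq / H_hat            # one complex multiplication (division) per sub-carrier

detected = np.sign(equalized.real) + 1j * np.sign(equalized.imag)
print("symbol errors:", np.count_nonzero(detected != tx_symbols))
```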
Importance of channel coding
Channel coding is used in most digital communication systems and especially in mobile communication. Channel coding means that each bit of information to be transmitted is spread over several, often very many, code bits. If these coded bits are then, via modulation symbols, mapped to a set of OFDM subcarriers that are well distributed over the overall transmission bandwidth of the OFDM signal, each information bit will experience frequency diversity when transmitted over a radio channel that is frequency-selective across that bandwidth, even though the individual subcarriers, and thus the individual code bits, do not experience any frequency diversity. Thus, in contrast to the transmission of a single wideband carrier, channel coding (combined with frequency interleaving) is an essential component for OFDM transmission to be able to benefit from frequency diversity on a frequency-selective channel.
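A deliberately simple way to see this effect is a toy rate-1/3 repetition code whose three copies of each bit are interleaved onto widely separated sub-carriers; a deep fade on one sub-carrier then no longer destroys the bit. This is only a sketch with made-up parameters; real systems use convolutional or turbo/LDPC codes rather than repetition.

```python
import numpy as np

# Toy illustration of frequency diversity via coding + interleaving: each
# information bit is repeated on three widely separated sub-carriers (a trivial
# rate-1/3 code), so a deep fade on one sub-carrier does not destroy the bit.
N, rep = 48, 3
info_bits = np.random.randint(0, 2, N // rep)

# Frequency interleaving: the copies of bit i land on sub-carriers i, i+16, i+32.
coded = np.tile(info_bits, rep)
tx = 1 - 2 * coded.astype(float)                   # BPSK on each sub-carrier

H = np.ones(N)
H[np.random.randint(0, N, 5)] = 0.05               # a few deeply faded sub-carriers
rx = tx * H + 0.1 * np.random.randn(N)

# The decoder combines the three copies of each bit before deciding.
combined = rx.reshape(rep, N // rep).sum(axis=0)
decoded = (combined < 0).astype(int)
print("bit errors:", np.count_nonzero(decoded != info_bits))
```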
OFDM for multiple access
OFDM can also be used as a user-multiplexing or multiple-access scheme, allowing for simultaneous frequency-separated transmissions to/from multiple mobile terminals. In the downlink direction, OFDM as a user-multiplexing scheme implies that, in each OFDM symbol interval, different subsets of the overall set of available subcarriers are used for transmission to different mobile terminals. Similarly, in the uplink direction, OFDM as a user-multiplexing or multiple access scheme implies that, in each OFDM symbol interval, different subsets of the overall set of subcarriers are used for data transmission from different mobile terminals.
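A minimal sketch of this idea, with made-up terminal names and a fixed equal split of the sub-carriers in one symbol interval, might look like this:

```python
# Minimal OFDMA sketch: in each OFDM symbol interval, disjoint subsets of the
# available sub-carriers are assigned to different terminals (illustrative only).
num_subcarriers = 64
terminals = ["UE-A", "UE-B", "UE-C", "UE-D"]   # hypothetical terminal names

allocation = {}
per_user = num_subcarriers // len(terminals)
for i, ue in enumerate(terminals):
    allocation[ue] = list(range(i * per_user, (i + 1) * per_user))

for ue, subcarriers in allocation.items():
    print(ue, "-> sub-carriers", subcarriers[0], "to", subcarriers[-1])
```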
Issues
A drawback of OFDM modulation, as with any kind of multi-carrier transmission, is the large variation in the instantaneous power of the transmitted signal. Such power variations imply reduced power-amplifier efficiency and higher power-amplifier cost. This is especially critical for the uplink, because low mobile-terminal power consumption and cost are so important. Several methods have been proposed for reducing the large power variations of an OFDM signal. However, most of these methods are limited in the extent to which the power variations can be reduced, and most also imply significant computational complexity and/or reduced link performance.
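The size of these power variations is usually quantified by the peak-to-average power ratio (PAPR). The following sketch estimates the PAPR of one randomly generated OFDM symbol (256 sub-carriers, an arbitrary choice):

```python
import numpy as np

# Sketch: compute the peak-to-average power ratio (PAPR) of a random OFDM
# symbol to show how large the instantaneous power variations can be.
N = 256
symbols = np.exp(1j * 2 * np.pi * np.random.rand(N))   # unit-power sub-carrier symbols
x = np.fft.ifft(symbols) * np.sqrt(N)                   # time-domain symbol, unit average power

power = np.abs(x) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(f"PAPR of this OFDM symbol: {papr_db:.1f} dB")    # typically around 8-12 dB for N = 256
```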
Tuesday, December 8, 2009
Femtocells - The new cellular concept
Femtocells are destined to transform the way mobile operators build their cellular networks and grow their coverage and capacity. Femtocells are small base stations operating in the usual licensed bands. They are very small, use very low-power transmitters, and can even be placed in individual homes. This concept is very different from the traditional cellular concept.
Advantages
The most important advantage is improved coverage. Since the target is only a small set of users, providing them with reliable coverage and greater bandwidth is easy. Another advantage is the cost saving involved: the need to upgrade capacity as the subscriber base grows is easily met with femtocells.
Issues
- New levels of interference mitigation and management are required, covering macro-to-femto, femto-to-femto, and femto-to-handset interference.
- System selection, integration with the operator’s core network, and access control all become more complicated.
- Security-related aspects need to be taken care of.
- Network and frequency planning will be more sophisticated.
Sunday, September 6, 2009
Next generation Mobile WiMAX
WiMAX (Worldwide Interoperability for Microwave Access) is a telecommunications technology that provides wireless transmission of data using a variety of transmission modes, from point-to-multipoint links to portable and fully mobile Internet access. Mobile WiMAX enables the convergence of mobile and fixed broadband networks through a common wide-area radio-access technology and a flexible network architecture. The next-generation mobile WiMAX will be capable of data-transfer rates in excess of 1 Gbit/s. It is expected to support a wide range of high-quality, high-capacity IP-based services and applications while maintaining full backward compatibility with existing mobile WiMAX systems.
Architecture features
IEEE 802.16m (the new version of 802.16) uses OFDMA as the multiple-access scheme in both the downlink and the uplink. It supports both time-division duplex (TDD) and frequency-division duplex (FDD) schemes, including half-duplex FDD (HFDD) operation of the mobile stations in FDD networks. The frame-structure attributes and baseband processing are common to both duplex schemes. The modulation schemes supported include quadrature phase-shift keying (QPSK), 16-QAM, and 64-QAM. To overcome performance issues with adaptive modulation, a constellation-rearrangement scheme is used. Next-generation mobile WiMAX supports advanced multi-antenna techniques such as single-user and multi-user MIMO, along with various transmit-diversity schemes. The MAC features are an extension of the existing standard.
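As a rough illustration of adaptive modulation, the sketch below picks QPSK, 16-QAM or 64-QAM from an SNR estimate; the thresholds are assumed values for illustration and are not taken from the 802.16m specification.

```python
# Illustrative sketch of adaptive modulation: the link picks QPSK, 16-QAM or
# 64-QAM depending on the estimated SNR. The thresholds below are assumed
# values for illustration, not taken from the 802.16m specification.
def select_modulation(snr_db: float):
    """Return (scheme, bits per symbol) for a given SNR estimate."""
    if snr_db < 12.0:
        return "QPSK", 2
    elif snr_db < 20.0:
        return "16-QAM", 4
    else:
        return "64-QAM", 6

for snr in (6.0, 15.0, 25.0):
    scheme, bits = select_modulation(snr)
    print(f"SNR {snr:4.1f} dB -> {scheme} ({bits} bits/symbol)")
```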
The next-generation system is designed to provide state-of-the-art mobile broadband wireless access in the next decade and to satisfy the growing demand for advanced wireless multimedia applications and services.
Sunday, July 26, 2009
MIMO - The Basics
Multiple-input multiple-output (MIMO) systems are today regarded as one of the most promising research areas of wireless communications. This is due to the fact that a MIMO channel can offer a significant capacity gain over a traditional single-input single-output (SISO) channel.
MIMO overview
MIMO is essentially an antenna technology: it uses multiple antennas at both the transmitter and the receiver so that a variety of signal paths can carry the data in parallel. The increase in spectral efficiency offered by MIMO systems is based on exploiting space (or antenna) diversity at both the transmitter and the receiver.
Between a transmitter and a receiver, the signal can take many paths, and moving the antennas even a small distance changes the set of paths used. The variety of available paths arises from the number of objects beside, or even in, the direct path between transmitter and receiver. Previously these multiple paths only served to introduce interference; with MIMO, they can be used to increase the capacity of a link.
Basic concept of MIMO wireless schemes
One of the core ideas behind MIMO wireless systems is space-time signal processing, in which time (the natural dimension of digital communication data) is complemented with the spatial dimension inherent in the use of multiple, spatially distributed antennas, i.e. multiple antennas located at different points. Accordingly, MIMO wireless systems can be viewed as a logical extension of the smart antennas that have been used for many years to improve wireless links.
MIMO summary
As a result of the use of multiple antennas, MIMO wireless technology is able to considerably increase the capacity of a given channel while still obeying Shannon's law. By increasing the number of receive and transmit antennas, it is possible to increase the throughput of the channel roughly linearly with every pair of antennas added to the system. This makes MIMO one of the most important wireless techniques to be employed in recent years. As spectral bandwidth becomes an ever more valuable commodity for radio communications systems, techniques are needed to use the available bandwidth more effectively, and MIMO wireless technology is one of them.
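A quick way to see this scaling is to estimate the standard equal-power capacity formula C = E[log2 det(I + (SNR/Nt) H H^H)] for an i.i.d. Rayleigh channel by Monte-Carlo simulation; the sketch below (illustrative only, not a measured result) shows capacity growing roughly in proportion to the number of antenna pairs.

```python
import numpy as np

# Sketch: ergodic capacity of an Nt x Nr i.i.d. Rayleigh MIMO channel,
# C = E[ log2 det(I + (SNR/Nt) * H H^H) ], showing roughly linear growth
# with min(Nt, Nr). Monte-Carlo estimate with arbitrary parameters.
def mimo_capacity(nt: int, nr: int, snr_db: float, trials: int = 2000) -> float:
    snr = 10 ** (snr_db / 10)
    total = 0.0
    for _ in range(trials):
        H = (np.random.randn(nr, nt) + 1j * np.random.randn(nr, nt)) / np.sqrt(2)
        total += np.log2(np.real(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T)))
    return total / trials

for n in (1, 2, 4):
    print(f"{n}x{n} MIMO, 20 dB SNR: ~{mimo_capacity(n, n, 20.0):.1f} bit/s/Hz")
```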
Thursday, June 11, 2009
The Race to 4G....
A long-term battle is brewing between two emerging high-speed wireless technologies, WiMAX and Long Term Evolution (LTE). Each would more than quadruple existing wireless wide-area access speeds for users. Both are 4G technologies designed to move data rather than voice, and both are IP networks based on OFDM technology.
The two technologies are somewhat alike in the way they transmit signals and even in their network speeds. The meaningful differences have more to do with politics - specifically, which carriers will offer which technology.
The Genesis
WiMAX is based on an IEEE standard (802.16). It's an open standard that was debated by a large community of engineers before being ratified. This openness means WiMAX equipment is standardized and therefore cheaper to buy!
LTE, or Long Term Evolution, is a 4G wireless technology and is considered the next step in the GSM evolution path after the UMTS/HSDPA 3G technologies. LTE is championed and standardized by the members of the 3GPP (3rd Generation Partnership Project), a global telecommunications consortium with members in most GSM-dominant countries.
LTE vs WiMAX
Whereas WiMAX emerged from the Wi-Fi IP paradigm, LTE is the result of the classic GSM technology path. LTE is behind in the race to 4G, with WiMAX getting an early lead as the likes of Sprint/Clearwire and several operators in Asia opt to go with WiMAX in the near term. So where WiMAX has a speed-to-market advantage, LTE has massive expected adoption and its GSM parentage to back it up.
LTE will take time to roll out, with deployments reaching mass adoption by 2012. WiMAX is out now, and more networks should be available later this year.
Speed offered
On paper, LTE will be faster than the current generation of WiMAX, but 802.16m, which should be ratified this year, offers similar speeds. The speeds expected from both LTE and WiMAX are hard to nail down, primarily because the technologies are just rolling out, and many factors come into play: speed to an end user also depends on how many users are connected to a cell tower, how far away they are, what frequency is used, the processing power of the user's device, and other factors.
Who will win?
For end users, the current debate over WiMAX vs. LTE is largely theoretical but nonetheless important. Analysts see a clear dominance by LTE in a few years, since so many carriers are bound to adopt it. However, that won't serve every user or every company. It is still going to be a combination of technologies, and WiMAX may be one of them; but not the only one!
Wednesday, May 6, 2009
A faster Internet?
The Internet is founded on a very simple premise: shared communications links are more efficient than dedicated channels that lie idle much of the time. And so we share. We share local area networks at work and neighborhood links from home. And then we share again: at any given time, a terabit backbone cable is shared among thousands of users surfing the Web, downloading videos, and more. But there's a profound flaw in the protocol that governs how people share the Internet's capacity. The protocol allows you to seem polite even as you elbow others aside, taking far more resources than they do.
You might be shocked to learn that the designers of the Internet intended that your share of Internet capacity would be determined by what your own software considered fair. They gave network operators no mediating role between the conflicting demands of the Internet's hosts. The Internet's primary sharing algorithm is built into the Transmission Control Protocol (TCP), a routine on your own computer that most programs use to send and receive data. TCP is one of the twin pillars of the Internet, the other being the Internet Protocol, which delivers packets of data to particular addresses. The two together are often called TCP/IP.
Forcing the way!
The TCP routine constantly increases your transmission rate until packets fail to get through. Then TCP very politely halves your bit rate. This cycle is known as additive increase, multiplicative decrease (AIMD); it is often loosely, though not quite accurately, called exponential back-off. All other TCP routines around the Internet behave in just the same way, in a cycle of taking, then giving, that fills the pipes while sharing them equally.
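A toy simulation makes the behaviour concrete: two flows sharing one bottleneck, each adding a fixed amount of rate per round trip and halving on congestion, end up with roughly equal rates regardless of where they start. All the numbers below are arbitrary.

```python
# Toy simulation of TCP's additive-increase/multiplicative-decrease cycle:
# two flows share a bottleneck; each adds a little rate every round trip and
# halves its rate whenever the link overflows. Purely illustrative numbers.
capacity = 100.0                       # bottleneck capacity (arbitrary units)
rates = [10.0, 60.0]                   # two flows starting far apart

for rtt in range(200):
    rates = [r + 1.0 for r in rates]              # additive increase each RTT
    if sum(rates) > capacity:                     # congestion: packets dropped
        rates = [r / 2.0 for r in rates]          # multiplicative decrease

print("after 200 RTTs:", [round(r, 1) for r in rates])  # rates end up roughly equal
```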
Fair play?
An equal bit rate for each data flow is likely to be extremely unfair, by any realistic definition. It's like insisting that boxes of food rations must all be the same size, no matter how often each person returns for more or how many boxes are taken each time. But any programmer can simply run the TCP routine multiple times to get multiple shares. It's much like getting around a food-rationing system by duplicating ration coupons. This trick has always been recognized as a way to sidestep TCP's rules: the first Web browsers opened four TCP connections!
The solution!
There is a far better solution, according to Bob Briscoe. It would allow light browsing to go blisteringly fast while hardly prolonging heavy downloads at all. The solution comes in two parts. It begins by making it easier for programmers to run TCP multiple times, a deliberate break from TCP-friendliness. Programs set a new parameter, a weight, so that whenever your data comes up against other traffic all trying to get through the same bottleneck, you get a share of the total capacity in proportion to your weight. The key is to set the weights high for light interactive usage, like surfing the Web, and low for heavy usage, such as movie downloading.
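A back-of-the-envelope sketch of weighted sharing at a single bottleneck (the flow names, weights, and link capacity are all made up for illustration):

```python
# Sketch of weighted sharing at a bottleneck: each flow receives capacity in
# proportion to its weight, so a light interactive flow with a high weight
# finishes fast while heavy downloads with low weights yield briefly.
capacity_mbps = 50.0
flows = {"web page": 8.0, "video download 1": 1.0, "video download 2": 1.0}

total_weight = sum(flows.values())
for name, weight in flows.items():
    share = capacity_mbps * weight / total_weight
    print(f"{name:>16}: weight {weight:.0f} -> {share:.1f} Mbit/s while it lasts")
```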
Imagine a world where some Internet service providers offer a deal at a flat price but with a monthly congestion-volume allowance. Note that this allowance doesn't limit downloads as such; it limits only those that persist during congestion. If you used a peer-to-peer program like BitTorrent to download 10 videos continuously, you wouldn't bust your allowance so long as your TCP weight was set low enough. Your downloads would draw back during the brief moments when flows with higher weights came along. But in the end, your video downloads would finish hardly later than they do today.
Reference
- http://www.cs.ucl.ac.uk/staff/bbriscoe/projects/refb/
- http://www.spectrum.ieee.org