
A Measurement Study of a Large-Scale P2P IPTV System
Xiaojun Hei, Chao Liang, Jian Liang, Yong Liu and Keith W. Ross
Department of Computer and Information Science
Department of Electrical and Computer Engineering
Polytechnic University, Brooklyn, NY, USA 11201
Abstract
An emerging Internet application, IPTV, has the potential to flood Internet access and backbone ISPs with massive amounts of new traffic. We recently measured 200,000 IPTV users for a single program, receiving at an aggregate simultaneous rate of 100 gigabits/second. Although many architectures are possible for IPTV video distribution, several chunk-driven P2P architectures have been successfully deployed on the Internet. In order to gain insight into chunk-driven P2P IPTV systems and the traffic loads they place on ISPs, we have undertaken an in-depth measurement study of one of the most popular IPTV systems, namely, PPLive. We have developed a dedicated PPLive crawler, which enables us to study the global characteristics of the chunk-driven PPLive system. We have also collected extensive packet traces for various measurement scenarios, including both campus access networks and residential access networks. The measurement results obtained through these platforms bring important insights into IPTV user behavior, P2P IPTV traffic overhead and redundancy, peer partnership characteristics, P2P IPTV viewing quality, and P2P IPTV design principles.
1 Introduction
With the widespread adoption of broadband residential ac-
cess, IPTV may be the next disruptive IP communication
technology [11]. With potentially hundreds of millions of
users watching streams of 500 kbps or more, IPTV would
not only revolutionize the entertainment and media indus-
tries, but could also overwhelm the Internet backbone and
access networks with traffic. Given this possible tidal wave
of new Internet traffic, it is important for the Internet research
community to acquire an in-depth understanding of the de-
livery of IPTV, particularly for the delivery architectures that
hold the greatest promise for broad deployment in the near
future.
There are several classes of delivery architectures for
IPTV, including native IP multicast [14], application-level
infrastructure overlays such as those provided by CDN com-
panies [1, 19], peer-to-peer multicast trees such as in end-
system multicast [12], and chunk-driven P2P streaming such as CoolStreaming [26] and PPLive [6]. Each of these architecture classes imposes different traffic patterns and design
challenges on Internet backbone and access networks. Re-
quiring minimal infrastructure, P2P architectures offer the
possibility of rapid deployment at low cost.
In terms of the number of simultaneous users, the most
successful IPTV deployments to date have employed chunk-
driven P2P streaming architectures. Bearing strong similari-
ties to BitTorrent [13], chunk-driven P2P architectures have
the following characteristics:
1. A television channel is divided into media chunks (e.g.,
each chunk consisting of one second of media data) and
is made available from an origin server.
2. A host, interested in viewing a particular channel, re-
quests from the system a list of hosts currently watching
the channel. The host then establishes partner relation-
ships (TCP connections) with a subset of hosts on the
list.
3. Each host viewing the channel buffers and shares
chunks with other hosts viewing the same channel. In
particular, a host periodically receives buffer maps from
each of its current partners. The buffer map indicates
the chunks the partner currently has available. Using a
scheduling algorithm, the host requests from its partners
the chunks that it will need in the near future.
4. As in BitTorrent, each host continually seeks new part-
ners that deliver data at higher rates than its existing
partners.
An important characteristic of chunk-driven P2P algorithms
is the lack of an (application-level) multicast tree - a charac-
teristic particularly desirable for the highly dynamic, high-
churn P2P environment [26]. Although these chunk-driven
algorithms have similarities with BitTorrent, BitTorrent in it-
self is not a feasible delivery architecture, since it does not
account for the real-time needs of IPTV.
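The scheduling policies used by deployed chunk-driven systems are proprietary; as a rough illustration of the mechanics described above (requesting chunks advertised in partners' buffer maps, favoring chunks with near playback deadlines and, most likely, rarer chunks), the following Python sketch shows one plausible request scheduler. All names and parameters are illustrative assumptions, not PPLive's actual algorithm.

```python
# A rough sketch (not PPLive's actual algorithm) of deadline-aware, rarest-first
# chunk scheduling driven by partners' buffer maps. All names are illustrative.
from collections import Counter

def schedule_requests(have, playback_pos, partner_maps, horizon=30, urgent_window=5):
    """Decide which missing chunks to request from which partner.

    have         -- set of chunk IDs already buffered locally
    playback_pos -- ID of the chunk currently being played
    partner_maps -- dict: partner -> set of chunk IDs advertised in its buffer map
    """
    wanted = [c for c in range(playback_pos, playback_pos + horizon) if c not in have]
    # Rarity: how many partners advertise each wanted chunk.
    rarity = Counter()
    for chunks in partner_maps.values():
        for c in wanted:
            if c in chunks:
                rarity[c] += 1
    # Chunks close to their playback deadline first, then rarest-first for the rest.
    urgent = [c for c in wanted if c - playback_pos < urgent_window]
    rest = sorted((c for c in wanted if c - playback_pos >= urgent_window),
                  key=lambda c: rarity[c])
    requests = {}
    for c in urgent + rest:
        holders = [p for p, chunks in partner_maps.items() if c in chunks]
        if holders:
            # Spread load: send the request to the holder with the fewest pending requests.
            target = min(holders, key=lambda p: len(requests.get(p, [])))
            requests.setdefault(target, []).append(c)
    return requests  # partner -> ordered list of chunk IDs to request

partners = {"p1": {10, 11, 12, 15}, "p2": {10, 13, 14, 15}}
print(schedule_requests(have={10}, playback_pos=10, partner_maps=partners, horizon=6))
# {'p1': [11, 12, 15], 'p2': [13, 14]}
```

A real client would also cap per-partner request queues and re-request chunks whose deadlines approach; those details are omitted here.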
Several chunk-driven P2P streaming systems have been
successfully deployed to date, accommodating thousands of
simultaneous users. Almost all of these deployments have originated from China (including Hong Kong). The pioneer in the field, CoolStreaming, reported more than 4,000 simultaneous users in 2003. More recently, a num-
ber of second-generation chunk-driven P2P systems have re-
ported phenomenal success on their Web sites, advertising
tens of thousands of simultaneous users who watch channels
at rates between 300 kbps and 1 Mbps. These systems include
PPLive [6], ppStream [7], VVSky [9], TVAnts [8] and FeiD-
ian [4].
Given the success to date of many of these IPTV systems,
as well as their potential to swamp the Internet with massive
amounts of new traffic in the near future, we have been moti-
vated to carry out an extensive measurement study on one of
the chunk-driven P2P streaming systems, namely, PPLive.
We chose PPLive as it is currently one of the most popular, if not the most popular, IPTV deployments to date.
In particular, as part of a preliminary study we performed
on PPLive, we measured the number of simultaneous users
watching a PPLive broadcast of the annual Spring Festival
Gala on Chinese New Year on January 28, 2006. We ob-
served that PPLive broadcasted this event to over 200,000
users at bit rates in the 400-800 kbps range, corresponding to
an aggregate bit rate in the vicinity of 100 gigabits/sec!
In an earlier workshop paper, we reported preliminary
measurement results for PPLive [17]. The current paper
goes significantly further, providing a comprehensive study
of PPLive, including insights into the global properties of the
system. Achieving these deeper insights has been challeng-
ing because the PPLive protocol is proprietary. In particular,
in order to build the measurement tools that were used to col-
lect much of the data in this paper, we had to analyze a large
portion of the PPLive protocol.
In this paper, we seek to answer the following questions
about a large-scale P2P IPTV deployment:
What are the user characteristics? For both popular and
less-popular PPLive channels, how does the number of
users watching a channel vary with time? As with tra-
ditional television, are there diurnal variations in user
demand? What are the dynamics of user churn? What
is the geographic distribution of the users, and how does
this distribution fluctuate over time?
How much overhead and redundant traffic is there?
What fraction of bytes a peer sends (or receives) is con-
trol data and what fraction is actual video data? What
fraction of the video traffic that a peer receives is redun-
dant traffic?
What are the characteristics of a peer’s partnerships
with other peers? How many partners does a peer have?
What are the durations of the partnerships? At what
rates does a peer download from and upload to its part-
ners? How are the partnerships different for a campus
peer and a residential peer? How do the partnerships
compare to those in BitTorrent?
How good is the viewing quality? What are the view-
ing quality metrics in an IPTV system? How well does
PPLive perform with respect to these metrics?
What are the fundamental requirements for a successful
chunk-driven P2P IPTV system? How does a P2P IPTV
system maintain high enough downloading rates on all
peers with heterogeneous uploading capacities? What
is the video buffering requirement for smooth playback
on individual peers in the face of rate fluctuations on
peering connections and peer churns?
In this paper, we attempt to answer these questions by us-
ing a custom-designed PPLive crawler and using packet snif-
fers deployed at both high-speed campus access and broad-
band residential access points. Quantitative results obtained
in our study bring light to important performance and design
issues of live streaming over the public Internet.
This paper is organized as follows. We conclude this sec-
tion with an overview of related P2P measurement work. In
Section 2, we provide an overview of different aspects of
PPLive, including its architecture and its signaling and management protocols, based on our measurement studies. The design and development of the tools are presented in detail in Section
3. Our measurement tools include an active crawler and a
passive sniffer. In Section 4, using our PPLive crawler, we
present the global-scale measurement results for the PPLive
network, including number of users, arrival and departure
patterns, and peer geographic distributions. In Section 5,
by sniffing monitored peers, we present the traffic patterns
and peering strategies as viewed by residential and campus
PPLive clients. In Section 6, we characterize the stream-
ing performance of PPLive, including playback freezing and
restoration, using our playback monitor. Finally, based on
our measurement results, we outline some design guidelines
for the successful deployment of IPTV applications over the
Internet in Section 7.
1.1 Related P2P Measurement Work
To our knowledge, this paper (along with [17]) is the first
measurement study of a large-scale P2P streaming system.
There are, however, a number of recent measurement stud-
ies of other types of P2P systems, including file sharing,
content-distribution, and VoIP. For file sharing, Saroiu et al.
measured Napster and Gnutella [23] and provided a de-
tailed characterization of end-user hosts in these two sys-
tems. Their measurement results showed significant het-
erogeneity and lack of cooperation across peers participat-
ing in P2P systems. Gummadi et al. monitored KaZaa traffic [16] to characterize KaZaa's multimedia workload and showed that locality-aware P2P file-sharing architectures can achieve significant bandwidth savings. Ripeanu
et al. crawled the one-tier Gnutella network to extract its
overlay topology. For the latest two-tier Gnutella network,
Stutzbach et al. provided a detailed characterization of P2P
overlay topologies and their dynamics in [25]. Liang et al.
used active crawling in [20] to gain an in-depth understanding of the KaZaa overlay structure and dynamics. In [21],
Liang et al. further demonstrated the existence of content
pollution and poisoning in KaZaa using an active crawler.
A measurement study was carried out for the live stream-
ing workload from a large content delivery network in [24].
For content distribution, Izal et al. and Pouwelse et al. re-
ported measurement results for BitTorrent in [18] and [22]. For
VoIP, two measurement studies of Skype are available [10]
and [15]. A detailed protocol analysis of Skype was pre-
sented in [10], and the Skype traffic pattern reported in [15] differs fundamentally from that of previous file-sharing P2P systems.
Performance evaluation of CoolStreaming was carried out
over PlanetLab [26] and the measurement results showed
that chunk-driven live streaming systems achieve significantly more continuous media playback than tree-based systems.
2 Overview of PPLive
PPLive is a free P2P IPTV application. According to the
PPLive web site [6] in May 2006, PPLive provides 200+
channels with 400,000 daily users on average. The bit rates
of video programs mainly range from 250 kbps to 400 kbps
with a few channels as high as 800 kbps. PPLive does not
own video content; the video content is mostly feeds from
TV channels in Mandarin. The channels are encoded in
two video formats: Windows Media Video (WMV) or Real
Video (RMVB). The encoded video content is divided into
chunks and distributed to users through the PPLive P2P net-
work. The PPLive web site [6] provides limited informa-
tion about its proprietary technology. Through our measure-
ment studies and protocol analysis, however, we have gained
significant insight into the PPLive protocols and streaming
mechanisms. In order to gain a better understanding of our
measurement tools and results, in this section we provide an
overview of the PPLive operation. The overview also pro-
vides an introduction into the design of a chunk-based video
streaming system.
The PPLive software, running in user computers (peers),
has two major communication protocols: (i) a registration
and peer discovery protocol; and (ii) a P2P chunk distribu-
tion protocol. Figure 1 depicts an overview of the registra-
tion and peer discovery protocol. When an end-user starts the
PPLive software, it joins the PPLive network and becomes a
PPLive peer node. The first action (step 1) is an HTTP ex-
change with the PPLive Web site to retrieve a list of channels
distributed by PPLive. Once the user selects a channel, the
peer node registers with the bootstrap root servers and re-
quests a list of peers that are currently watching the channel
(step 2). The peer node then communicates with the peers
in the list to obtain additional lists (step 3), which it aggre-
gates with its existing list. In this manner, each peer main-
tains a list of other peers watching the channel. A peer on a
list is identified by its IP address and UDP and TCP service
port numbers. The registration and peer discovery protocol
commonly runs over UDP; however, if UDP fails (for
example, because of a firewall), PPLive will instead use TCP
for registration and peer discovery.
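To make the gossip-style aggregation in steps 2 and 3 concrete, here is a self-contained toy sketch in Python; the real registration and peer-list messages are proprietary UDP exchanges, so the "network" below is simulated by a plain dictionary.

```python
# A self-contained toy illustration of the peer-list aggregation in steps 2-3.
# The real messages run over UDP and are proprietary; here the overlay is
# simulated by a dict so the aggregation logic itself is runnable.
def discover_peers(bootstrap_list, overlay, rounds=3):
    """bootstrap_list -- peers returned by the root server (step 2)
    overlay        -- simulated network: peer -> partial peer list it would return (step 3)
    """
    known = set(bootstrap_list)
    for _ in range(rounds):
        learned = set()
        for peer in known:
            learned.update(overlay.get(peer, []))   # ask each known peer for its list
        if learned <= known:                        # nothing new: stop gossiping
            break
        known |= learned                            # aggregate into our own peer list
    return known

overlay = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B", "C"]}
print(sorted(discover_peers(["A", "B"], overlay)))   # ['A', 'B', 'C', 'D']
```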
Figure 1: Channel and peer discovery

We now describe the chunk distribution protocol. At any
given instant, a peer buffers up to a few minutes worth of
chunks within a sliding window. Some of these chunks may
be chunks that have been recently played; the remaining
chunks are chunks scheduled to be played in the next few
minutes. Peers upload chunks to each other. To this end,
peers send to each other “buffer map” messages; a buffer
map message indicates which chunks a peer currently has
buffered and can share. The buffer map message includes
the offset (the ID of the first chunk), the length of the buffer
map, and a string of zeroes and ones indicating which chunks
are available (starting with the chunk designated by the off-
set). The offset field is 4 bytes long. For a PPLive channel with a bit rate of 340 kbps and a chunk size of 14 KBytes, this chunk range of 2^32 covers a time range of about 2042 days without wrap-around. Figure 2 illustrates a buffer map.
Figure 2: A peer's buffer map of video chunks

A peer
can request, over a TCP connection, a buffer map from any
peer in its current list of peers. After a peer A receives a
buffer map from peer B, peer A can request one or more
chunks that peer B has advertised in the buffer map. As
we will see in Section 5, a peer A may download chunks
from tens of other peers simultaneously. PPLive continu-
ally searches for new partners from which it can download
chunks. Since PPLive is proprietary, we do not know the ex-
act algorithm a peer uses for choosing partners and request-
ing chunks. Clearly, when a peer requests chunks, it should
give some priority to the missing chunks that are to be played
out first. Most likely, it also gives priority to rare chunks, that
is, chunks that do not appear in many of its partners’ buffer
maps (see [2] [26]). Peers can also download chunks from
the PPLive channel server. The chunks are sent over TCP
connections.
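As a concrete illustration of the buffer-map idea (an offset identifying the first chunk, a length, and a bit string marking chunk availability), the following Python sketch encodes and decodes such a message. The field layout and encoding are our own illustrative choices, not PPLive's proprietary wire format.

```python
# A hedged sketch of a buffer-map message: a 4-byte offset (ID of the first
# chunk), a length field, and a bit string marking which chunks after the
# offset are available. The layout and encoding are illustrative, not the
# proprietary PPLive wire format.
import struct

def encode_buffer_map(offset, availability):
    """availability -- list of 0/1 flags for chunks offset, offset+1, ..."""
    bits = "".join(str(b) for b in availability).encode("ascii")
    return struct.pack("!IH", offset & 0xFFFFFFFF, len(bits)) + bits

def decode_buffer_map(message):
    offset, length = struct.unpack("!IH", message[:6])
    bits = message[6:6 + length].decode("ascii")
    return offset, {offset + i for i, b in enumerate(bits) if b == "1"}

msg = encode_buffer_map(1000, [1, 1, 1, 0, 0, 1, 0, 1])
print(decode_buffer_map(msg))   # (1000, {1000, 1001, 1002, 1005, 1007})
```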
Having addressed how chunks are distributed among
peers, we now briefly describe the video display mechanism. As mentioned above, PPLive works in conjunction
with a media player (either Windows Media Player or Re-
alPlayer). Figure 3 illustrates the interaction between the
PPLive peer software and the media player. The PPLive
engine, once having buffered a certain amount of contigu-
ous chunks, launches the media player. The media player
then makes an HTTP request to the PPLive engine, and
the PPLive engine responds by sending video to the media
player. The media player buffers the received video; when it
has buffered a sufficient amount of video content, it begins
to render the video.
Figure 3: PPLive streaming process
If, during video playback, the PPLive engine becomes in-
capable of supplying the video player with data at a sufficient
rate (because the PPLive client is in turn not getting chunks
fast enough from the rest of the PPLive network), then the
media player will starve. When this occurs, depending on
the severity of the starvation, the PPLive engine may have
the media player wait where it left off (freezing) or it may
have the media player skip frames.
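The two-stage buffering just described (the engine accumulates chunks before launching the player, and the player buffers again before rendering) can be modeled with a pair of queues. The sketch below is a simplified illustration; the thresholds are invented, the local HTTP exchange between engine and player is omitted, and the contiguity check on buffered chunks is reduced to a simple count.

```python
# A simplified model of the two-stage buffering between the PPLive engine and
# the media player. Thresholds are invented, not PPLive's actual values.
from collections import deque

class PlaybackPipeline:
    def __init__(self, engine_threshold=5, player_threshold=3):
        self.engine_q = deque()                    # chunks held by the PPLive engine
        self.player_q = deque()                    # video buffered inside the media player
        self.engine_threshold = engine_threshold   # chunks buffered before launching the player
        self.player_threshold = player_threshold   # chunks the player buffers before rendering
        self.player_started = False
        self.rendering = False

    def chunk_arrives(self, chunk_id):
        self.engine_q.append(chunk_id)
        if not self.player_started and len(self.engine_q) >= self.engine_threshold:
            self.player_started = True             # engine launches the media player
        if self.player_started:
            while self.engine_q:                   # engine streams buffered chunks to the player
                self.player_q.append(self.engine_q.popleft())
        if not self.rendering and len(self.player_q) >= self.player_threshold:
            self.rendering = True                  # player has buffered enough to start rendering

    def play_one(self):
        if self.rendering and self.player_q:
            return self.player_q.popleft()
        return None                                # player has starved (freeze or frame skip)

pipeline = PlaybackPipeline()
for c in range(8):
    pipeline.chunk_arrives(c)
print(pipeline.rendering, pipeline.play_one())     # True 0
```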
3 Measurement Methodologies
Our P2P network measurements fall into two categories: active crawling and passive sniffing. Active crawling is used to obtain a global view of the entire PPLive network for any channel. Our crawler has a UDP component for collecting peer lists from all the PPLive peers and a TCP component for collecting buffer maps from all the PPLive peers. Passive sniffing is used to gain a deeper insight into PPLive from the perspective of residential users and campus users.
3.1 Active Crawling
To characterize the behavior of the entire PPLive network,
we developed a crawler which repeatedly probes all of the
PPLive nodes that are watching a specific channel. Building
the crawler was a major challenge in itself, since we needed
to implement portions of the PPLive proprietary protocol.
To this end, using packet traces from passive sniffing and our
knowledge about how chunk-driven P2P streaming generally
operates, we were able to understand critical portions of the
PPLive’s signaling protocols. During a crawling experiment,
peer responses are recorded for online processing and off-
line analysis.
3.1.1 Harvesting Peer Lists
Recall from Section 2 that each peer watching a particular
channel maintains a peer list, which lists other peers cur-
rently watching the same channel. Also recall that any peer
A can send to any other peer B, within a UDP datagram,
a request for peer B’s peer list. The crawler, implementing
the PPLive protocol, sweeps across the peers watching the
channel and obtains the peer list from each visited peer. The
crawler operates in three phases:
Peer Registration: The UDP crawler first registers it-
self with one of the root servers by sending out a peer
registration packet. The significant information in this
packet includes a 128-bit channel identifier, its IP ad-
dress, and its TCP and UDP service ports. In contrast
to many other popular P2P applications, a PPLive peer
does not maintain a fixed peer ID, but instead generates
a new, random value every time it re-joins the channel.
Bootstrap: After the registration, the crawler sends out
multiple bootstrap peer list queries to the peer-list root
server for peers enrolled in this channel. In response
to a single query, the server will return a list of peers
(normally 50 peers), including IP addresses and service
port numbers. The crawler aggregates all the lists it has
received, thereby maintaining its own list of peers en-
rolled in the channel.
Peer Query: After obtaining an initial peer list from the
root servers, the crawler sends peer list queries directly
to those peers from the initial list. Active peers will
return part of their own peer lists, which get added to
the crawler’s list.
PPLive clients are highly dynamic, joining and leaving
PPLive and switching between channels frequently. To ac-
count for the highly dynamic nature of the PPLive clients, we
designed the crawler to periodically crawl the PPLive net-
work to track active peers once a minute. In particular, every
one minute, the crawler starts from scratch, with an empty
peer list. Furthermore, within each minute, the crawler is
only active for the first 15 seconds, during which it:
1. Obtains an initial peer list from the root server.
2. Sends peer queries to non-queried peers in the list.
3. Marks a peer active if it responds to the query with its
own peer list; expands the crawler’s peer list using the
lists received from active peers; returns to step (2) until
no new peers are obtained by peer queries.
This completes one loop of probing. In our experi-
ments, it normally takes about 6 seconds to finish one loop.
After one loop, to include peers that joined the network dur-
ing the probing loop, the crawler goes back to the beginning
of the list and probes all the peers again to find new active
peers. We repeat the process until the 15 seconds are up. The
crawler then records the active peers seen in this 15 seconds,
clears its memory, and goes to sleep until the beginning of the next minute. In this manner we obtain a profile of the ac-
tive peers every minute. We note in passing that the crawler
doesn't use a NAT traversal scheme. Peers behind NATs may
not set their NATs properly to receive peer queries from the
crawler. Our crawler therefore under-estimates the number
of active peers; in fact, in some of our experiments we observed
that as many as 37% of the PPLive peers could be behind
NATs. Although NATs make it difficult to determine the ab-
solute number of users at any time, our measurement results
still enable us to report time evolutionary trends and also pro-
vide lower bounds on the number of users.
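The crawl schedule described above can be summarized in a short sketch: once a minute the crawler bootstraps from the root server and keeps sweeping its growing peer list until a 15-second budget expires, recording every peer that answers a peer-list query as active. The code below is a simplified, runnable model with a simulated overlay; it does not implement PPLive's actual message formats.

```python
# A simplified, runnable model of the crawl schedule: bootstrap from the root
# server, then keep sweeping the growing peer list until the 15-second budget
# expires, marking every peer that answers a peer-list query as active.
import time

def crawl_once(bootstrap, query_peer_list, budget=15.0):
    deadline = time.monotonic() + budget
    known = set(bootstrap())                       # initial peer list from the root server
    active = set()
    while time.monotonic() < deadline:             # keep re-sweeping to catch late joiners
        for peer in list(known):
            if time.monotonic() >= deadline:
                break
            reply = query_peer_list(peer)          # active peers answer with part of their list
            if reply is not None:
                active.add(peer)
                known |= set(reply)                # expand the crawler's own peer list
    return active                                  # the minute's profile of active peers

# Simulated overlay for illustration; a real crawler would send UDP peer queries.
overlay = {"A": ["B", "C"], "B": ["A", "D"], "C": [], "D": ["B"]}
print(sorted(crawl_once(lambda: ["A"], overlay.get, budget=0.05)))   # ['A', 'B', 'C', 'D']

# One snapshot per minute (sketch):
# while True:
#     record(crawl_once(bootstrap, query_peer_list))
#     time.sleep(45)   # sleep for the rest of the minute
```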
3.1.2 Harvesting Buffer Maps from Active Peers
To monitor the buffer content of PPLive clients, we aug-
mented the crawler with a TCP component that retrieves the
buffer map from the active peers during the above crawling pro-
cess. To this end, as we crawl each peer, the crawler sends
a PPLive request for the peer’s buffer map. We then parse
the buffer maps off-line, to glean information about buffer
resources and timing issues at the remote peers throughout
the network.
3.2 Passive Sniffing
Passive sniffing captures the traffic exchanged between our
monitored peers and their partners in the PPLive network.
We collected multiple PPLive packet traces from four PCs:
two PCs connected to the Polytechnic University campus net-
work with 100 Mbps Ethernet access; and two PCs con-
nected to residential networks through cable modem. Most
of the PPLive users today have either one of these two types
of network connections. The PCs with residential access
were located in Manhattan and Brooklyn in New York. Each
PC ran Ethereal [3] to capture all inbound and outbound
PPLive traffic. We built our own customized PPLive packet
analyzer to analyze the various fields in the various PPLive
signaling and content packets.
3.2.1 Playback Monitoring
In IPTV, the user's perceived quality is vital for a successful
service. As shown in Figure 3, the media player interacts
with the PPLive engine. Whenever the PPLive engine has
received playable media chunks, the PPLive engine streams
these media chunks to the player. When the media player
runs out of media content, the player freezes, impacting
the user perceived quality. To trace the user playback per-
formance, we developed a simple PPLive playback moni-
tor. This monitor emulates the normal media playback pro-
cess and tracks the presentation time of the latest media
chunk. The difference between the actual playback time and
the latest media chunk presentation time indicates the amount of playable media buffered in the player. The monitor reports a playback freeze whenever this time difference reaches 0.
During the playback freezing period, the monitor continues
receiving media content from the PPLive engine. When the
buffered content is above a threshold, which is specified by
the content source in the media file header, the monitor re-
ports a recovery from freeze.
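The monitor logic reduces to tracking the gap between the playback position and the presentation time of the latest chunk received, flagging a freeze when the gap reaches zero and a recovery once enough content has accumulated again. A minimal sketch follows; the recovery threshold here is invented, since the real one is taken from the media file header.

```python
# A minimal sketch of the playback-monitor logic: track the gap between the
# actual playback position and the presentation time of the latest chunk
# received, report a freeze when the gap reaches zero, and report a recovery
# once buffered content again exceeds a start-up threshold (invented value).
def monitor(samples, recovery_threshold=5.0):
    """samples -- list of (wall_time, playback_time, latest_chunk_presentation_time)."""
    frozen, log = False, []
    for wall, played, latest in samples:
        buffered = latest - played                 # playable media left for the player
        if not frozen and buffered <= 0:
            frozen = True
            log.append((wall, "freeze"))
        elif frozen and buffered >= recovery_threshold:
            frozen = False
            log.append((wall, "recover"))
    return log

# Playback catches up with the download at t=10 and stalls; by t=20 enough
# content has arrived for the monitor to report a recovery.
samples = [(0, 0, 8), (5, 5, 9), (10, 10, 10), (15, 10, 13), (20, 10, 16)]
print(monitor(samples))   # [(10, 'freeze'), (20, 'recover')]
```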
4 Global Behavior of the PPLive Network
In this section, we report global statistics for the PPLive net-
work, collected by the UDP component of the crawler. PPLive
hosts more than 200 different programs. To compare the
characteristics of different programs, we crawled two pro-
grams: XING, a popular channel with a 5-star popularity
grade; and GUANG, a less popular channel with a 1-star
popularity grade. The two programs were both crawled for
one entire day in April, 2006. The crawler crawled both
programs every minute as described in the previous section.
Based on the lists of the active peers at every minute of
the crawled programs, we calculated the number of users,
user arrivals and departures, and user geographic locations.
What's more, we also collected data during the Chinese New Year to observe user behavior.
4.1 Number of Participating Users
Figure 4 shows how the number of users evolves for both
crawled programs. Both sub-figures are labelled in US East-
ern Standard Time. We first observe that the numbers of par-
ticipating users are quite different for the two programs. The
maximum number of users for the popular program reaches
nearly 1,700; however, that of the unpopular program is just
around 120. The diurnal trend is clear for both programs.
The major peaks appear during 9AM to 1PM EST, translat-
ing into 9PM to 1AM China local time. As we shall see,
those peaks are mostly contributed by users from China.
There are several smaller peaks scattered around 9PM to
5AM, translating into 9PM EST to 2AM WST. We believe
those are mostly due to users in US. We will show the user
geographic distribution in Section 4.3. This suggests that
people tend to use IPTV to watch TV programs outside of of-
fice hours, consistent with the behavior of regular TV users.
In contrast, a recent measurement study on Skype [15] sug-
gests that people tend to use VoIP service at work.
In Figure 5, we plot the evolution of the number of users for
the popular channel over one week. We can observe that
more people use PPLive during weekends than during week-
days. This again confirms that most users use IPTV in their
leisure time. As with many other P2P applications, the num-
ber of IPTV users is largely determined by the popularity of
the program. The annual Spring Festival Gala on Chinese
New Year is one of the most popular TV programs within
Chinese communities all over the world. Starting from 3AM
EST, January 28, 2006 (Chinese New Year Eve day), we ran
the crawler to collect all peer IP addresses from 14 PPLive
channels which were rebroadcasting the event. Figure 6 plots
References

Incentives Build Robustness in BitTorrent

B. Cohen
TL;DR: The BitTorrent file distribution system uses tit-for-tat as a method of seeking Pareto efficiency, which achieves a higher level of robustness and resource utilization than any currently known cooperative technique.
Proceedings Article

A case for end system multicast

TL;DR: The potential benefits of transferring multicast functionality from routers to end systems significantly outweigh the performance penalty incurred, and the results indicate that the performance penalties are low both from the application and the network perspectives.
Journal ArticleDOI

Multicast routing in datagram internetworks and extended LANs

TL;DR: In this paper, the authors specify extensions to two common internetwork routing algorithms (distance-vector routing and link-state routing) to support low-delay datagram multicasting beyond a single LAN, and discuss how the use of multicast scope control and hierarchical multicast routing allows the multicast service to scale up to large internetworks.
Proceedings ArticleDOI

CoolStreaming/DONet: a data-driven overlay network for peer-to-peer live media streaming

TL;DR: This paper presents DONet, a data-driven overlay network for live media streaming, and presents an efficient member and partnership management algorithm, together with an intelligent scheduling algorithm that achieves real-time and continuous distribution of streaming contents.
Proceedings ArticleDOI

An Analysis of the Skype Peer-to-Peer Internet Telephony Protocol

TL;DR: In this paper, the authors analyze Skype functions such as login, NAT and firewall traversal, call establishment, media transfer, codecs, and conferencing under three different network setups.
Frequently Asked Questions (11)
Q1. What are the contributions in "A Measurement Study of a Large-Scale P2P IPTV System"?

In order to gain insight into chunk-driven P2P IPTV systems and the traffic loads they place on ISPs, the authors have undertaken an in-depth measurement study of one of the most popular IPTV systems, namely, PPLive. The authors have developed a dedicated PPLive crawler, which enables us to study the global characteristics of the chunk-driven PPLive system. 

Thus, in the future, some level of dedicated infrastructure (such as dedicated proxy nodes) may have to be combined with the P2P distribution to deliver videos at higher rates.

Due to the distributed nature of chunk-driven P2P streaming, it is possible that a peer downloads duplicate chunks from multiple partners. 

There are several classes of delivery architectures for IPTV, including native IP multicast [14], application-level infrastructure overlays such as those provided by CDN companies [1, 19], peer-to-peer multicast trees such as in end-system multicast [12], and chunk-driven P2P streaming such as CoolStreaming [26] and PPLive [6].

Since the campus node uploads to many peers, the top peer video upload session only accounts for about 5% of the total video upload traffic. 

If, during video playback, the PPLive engine becomes incapable of supplying the video player with data at a sufficient rate (because the PPLive client is in turn not getting chunks fast enough from the rest of the PPLive network), then the media player will starve.

Most likely, it also gives priority to rare chunks, that is, chunks that do not appear in many of its partners’ buffer maps (see [2] [26]). 

There are, however, a number of recent measurement studies of other types of P2P systems, including file sharing, content-distribution, and VoIP. 

In response to a single query, the server will return a list of peers (normally 50 peers), including IP addresses and service port numbers. 

For streaming applications in best-effort networks, start-up buffering has always been a useful mechanism to deal with the rate variations of streaming sessions. 

if a TCP connection carries video, it should have a large number (say, at least 10) of large size TCP segments (say, > 1200 bytes) during its lifetime.
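Below is a small sketch of this heuristic applied to a packet trace; the connection identifiers and payload sizes would come from the captured packets, and the thresholds are the ones quoted above.

```python
# A sketch of the heuristic quoted above: a TCP connection is treated as a video
# session if it carries at least `min_segments` segments larger than
# `size_threshold` bytes over its lifetime.
from collections import defaultdict

def video_connections(packets, min_segments=10, size_threshold=1200):
    """packets -- iterable of (connection_id, tcp_payload_size_in_bytes)."""
    large = defaultdict(int)
    for conn, size in packets:
        if size > size_threshold:
            large[conn] += 1
    return {conn for conn, count in large.items() if count >= min_segments}

# Flow 1 sends many large segments (video); flow 2 sends only small signaling packets.
trace = [(1, 1400)] * 12 + [(2, 200)] * 30
print(video_connections(trace))   # {1}
```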