State of IP Spoofing
We use several levels of aggregation to simplify and clarify the data
for these charts. For client IP addresses that run multiple tests with
conflicting results, we use only the most recent valid test, ignoring
tests that could not determine whether spoofing was possible.
We map each IP address to its network prefix as seen in the Route Views
BGP tables (collected manually from the route-views.routeviews.org text
dumps), and use the most recent 12 months of tests from IP
addresses within any given prefix. Prefixes in which all tested client
addresses result in the same status are labeled as "spoofable" or
"unspoofable"; prefixes with conflicting results from different IP
addresses are labeled "inconsistent". We extrapolate our results to
the entire announced address space by assigning each prefix's status to
every IP address covered by that prefix.
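A minimal sketch of this per-IP and per-prefix aggregation, assuming an illustrative data layout (timestamped results keyed by client IP; none of these names come from the actual Spoofer codebase):

```python
def prefix_status(tests_by_ip):
    """Aggregate per-IP test outcomes into one prefix status.

    tests_by_ip maps each client IP within a BGP prefix to a list of
    (timestamp, result) pairs, where result is "spoofable",
    "unspoofable", or "unknown" (the test could not determine whether
    spoofing was possible).
    """
    latest = {}
    for ip, tests in tests_by_ip.items():
        # Keep only the most recent valid (non-"unknown") test per IP.
        valid = [r for _, r in sorted(tests) if r != "unknown"]
        if valid:
            latest[ip] = valid[-1]
    statuses = set(latest.values())
    if not statuses:
        return "untested"
    if len(statuses) == 1:
        return statuses.pop()   # "spoofable" or "unspoofable"
    return "inconsistent"       # conflicting results across client IPs
```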
To infer the status of ASes, we count the status of each network prefix
a given AS announces into the BGP table, and compute the fraction of
tested prefixes from that AS that permit spoofing. ASes with
inconsistent results are subdivided into those with less than half of
their prefixes considered spoofable (labeled "partly spoofable") and
those with at least half (labeled "mostly spoofable").
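The AS-level classification could be sketched as follows, assuming "inconsistent" prefixes count toward the spoofable fraction (function and label names are illustrative):

```python
def as_status(prefix_statuses):
    """Classify an AS from the statuses of its tested prefixes."""
    tested = [s for s in prefix_statuses
              if s in ("spoofable", "unspoofable", "inconsistent")]
    if not tested:
        return "untested"
    # Fraction of tested prefixes that permit (or may permit) spoofing.
    frac = sum(s != "unspoofable" for s in tested) / len(tested)
    if frac == 0.0:
        return "unspoofable"
    if frac == 1.0:
        return "spoofable"
    return "partly spoofable" if frac < 0.5 else "mostly spoofable"
```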
|Top Ten Spoofer Test Results
|Attacks using randomly spoofed source IP addresses over time observed by the UCSD network telescope
These graphs plot the number of attacks that use randomly spoofed
source IP addresses over time, as observed by the UCSD telescope.
If the attacker chooses source IP addresses uniformly at random,
the telescope will receive backscatter from denial of service
attacks, which we can use to infer the attack volumes for each
victim. We use IP geolocation to infer the locations of victim
IP addresses. You can learn more about the methodology behind
the telescope by reading the related paper, and obtain a more interactive
view using the IODA view.
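As a rough illustration of the scaling step: if the attacker chooses sources uniformly at random, a telescope covering a /p prefix observes about a 1/2^p fraction of the victim's backscatter. The sketch below is an idealized calculation with an invented function name, not the project's pipeline; the /8 default is approximately the size of the UCSD telescope:

```python
def estimate_attack_rate(observed_pps, telescope_prefix_len=8):
    """Scale backscatter seen at a darknet up to the victim's full rate.

    A telescope covering a /p prefix sees roughly a 1 / 2**p fraction
    of response (backscatter) packets when spoofed sources are chosen
    uniformly at random from the IPv4 space.
    """
    fraction_observed = 1.0 / (2 ** telescope_prefix_len)
    return observed_pps / fraction_observed

# e.g. 1,000 backscatter packets/sec at a /8 darknet suggest an attack
# eliciting about 256,000 response packets/sec at the victim.
```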
|Spoofing over last 6 months
This graph plots the spoofability of /24 prefixes and ASes
for the last 6 months, at a granularity of 1 day. In order to
compensate for the generally low rate of
testing (and to prevent visual clutter), all tests since 1 week
before the specified date are included in the spoofability calculation,
and all the "inconsistent" prefixes or ASes are considered
to be "spoofable". We do not use the same aggregation method as we do
with the pie charts, because we want to record changes within prefixes
and ASes instead of determining their current state.
See also the graph covering the entire lifetime of the Spoofer project.
|Source address filtering:
Each test run spoofs addresses from adjacent netblocks, beginning with
a direct neighbor (IP address + 1) all the way to an adjacent /8.
The following figure displays the granularity of source address filtering
(typically employed by service providers) along paths tested in our study. If
the filtering is occurring on a /8 boundary for instance, a client within that
network is able to spoof 16,777,215 other addresses.
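One way to enumerate such adjacent-netblock spoof sources, and to relate the filtering boundary to the count of still-spoofable addresses, is sketched below (an illustrative reconstruction, not the actual Spoofer probing code):

```python
import ipaddress

def adjacent_spoof_sources(client_ip):
    """Candidate spoof sources at increasing distances from the client.

    Flipping bit k of the client address yields a source in the
    adjacent block of size 2**k, i.e. the neighboring /(32-k): k = 0
    gives the direct neighbor and k = 24 lands in the adjacent /8.
    If the most distant source that still reaches the server has bit
    k flipped, filtering occurs on (at least) a /(32-k-1) boundary,
    leaving 2**(k+1) - 1 other addresses spoofable (e.g. a /8
    boundary leaves 2**24 - 1 = 16,777,215).
    """
    ip = int(ipaddress.IPv4Address(client_ip))
    return [(32 - k, str(ipaddress.IPv4Address(ip ^ (1 << k))))
            for k in range(0, 25)]
```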
Using the tracefilter mechanism, we measure filtering depth: the point
along the tested path (from each client to the server) at which
filtering is employed. Depth represents the number of IP routers through
which the client can spoof before being filtered.
Client tests originate within an autonomous system (AS), typically a
service provider. Here, we analyze the distribution of successful
spoofing in relation to the size of the provider.
Using DNS heuristics, we analyze the distribution of results
across different types of clients.
= Source address filtering in place
| Private || Routable || NAT || Client Count
Each test run attempts to send IP packets with different
spoofed addresses in order
to infer provider filtering policies.
Private sources are those defined in RFC 1918,
e.g. the 10/8, 172.16/12, and 192.168/16 prefixes.
Routable source addresses are those
present in BGP routing tables.
NAT sources are clients that are unable to spoof through their NAT setup.
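A hypothetical helper distinguishing the first two source classes; it hard-codes the RFC 1918 private prefixes and merely approximates "routable" (the report instead checks presence in BGP tables):

```python
import ipaddress

# RFC 1918 private prefixes, matching the table's "Private" class.
PRIVATE = [ipaddress.ip_network(p)
           for p in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def source_class(addr):
    """Classify a spoofed source address as private or routable.

    Approximation: anything outside RFC 1918 space is treated as
    routable; a real check would consult current BGP tables.
    """
    ip = ipaddress.ip_address(addr)
    if any(ip in net for net in PRIVATE):
        return "private"
    return "routable"
```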
We assess the geographic distribution of clients in
our dataset both to measure the extent of our testing coverage as
well as to determine if any region of the world is more susceptible to
spoofing. We use CAIDA's
plot-latlong package to generate these maps.
|Location of client tests
||Location of spoofable networks
Predictably, some percentage of machines will not be able to spoof IP
packets regardless of filtering policies. Some reasons are described
in our FAQ
. We exclude failed
clients from our summary results but characterize some of the underlying
reasons for failures that we are able to detect below:
Total completely failed spoof attempts: 68003
Failed as a result of being behind a NAT: 63406
Failed as a result of a (non-Windows) operating system block: 543
Failed as a result of Windows XP SP2: 1765
Failed as a result of other reasons: 2289
We began IPv6 probing with version 0.8 of the tester client.
Unique IPv6 Sessions: 27108
Spoofing rate (routable IPv6): 0.079%
Spoofing rate (bogon IPv6): 0.057%
Spoofing rate (private IPv6): 0.041%
This report, provided by CAIDA,
is intended to give a current aggregate view of ingress and egress
filtering and IP spoofing on the Internet. While the data in this report
is the most comprehensive of its type that we are aware of, it remains an
ongoing, incomplete project. The data here is representative
of the netblocks, addresses, and autonomous systems (ASes) of clients
from which we have received reports. The more client reports we receive,
the better: they increase our accuracy and coverage.
Download and run our tester client
to automatically contribute a report to our database. Note that this
involves generating a small number of IP packets with spoofed source addresses
from your box. This has yet to trip any alarms or cause problems for
our contributors, but you run the software at your own risk. The software
generates a customized report displaying the filtering policies of your
Internet service provider(s).
Feedback, comments and bug fixes welcome; contact spoofer-info at caida.org.
This page is regenerated six times daily. Last generated Wed Oct 18 08:10:33 PDT 2017.
Individual clients are counted once, regardless of the number of tests
they run.