Wednesday, January 7, 2009

Notes from CCC 2008

While attending [CCC 2008](http://events.ccc.de/congress/2008/), I was making notes, as usual (see [DIMVA](http://carolina.mff.cuni.cz/~trmac/blog/2008/notes-from-dimva-2008/), [DeepSec](http://carolina.mff.cuni.cz/~trmac/blog/2007/notes-from-deepsec-2007/)). Slides for some talks are available on talk pages in the [Fahrplan](http://events.ccc.de/congress/2008/wiki/Fahrplan), or try to find a [recorded video](http://events.ccc.de/congress/2008/wiki/Documentation#Conference_Recording). I hope the notes are useful.



### "Nothing to hide"

* "Nothing to hide" does not make sense:
* Where are the conference organizers keeping the money?
* Who knows the passwords?
* We are somewhat "lucky": weak WiFi cryptography allows "anonymous" internet connections
* OLPC might be in violation of GPL because it does not give source code to children

### The Trust Situation

* Germany: if you don't know what others know about you, you might adjust your
behavior (fear what they know), which is considered in conflict with being a
"free person".
* surveillance unavoidable => data protection law instead
* people's decisions are irrational heuristics, reduced number of hear-say "facts",
bias towards own values / group values
* people fear data isn't protected, hide information pre-emptively
* permanent databases: people hide illnesses, students with bad records give
up completely
* to make surveillance/data protection understandable, make it simpler, make data protection more
drastic/visible
* that's impractical => either allow people to avoid surveillance, or give up the
pretense of determining who keeps the data


### Security Failures in Smart Card Payment systems
* UK process:
* card reports account #, ..., magnetic strip data copy
* receives transaction description, PIN
* returns PIN verification result (handles incorrect PIN counter internally), authorization code
* magnetic strip data copy + PIN can be used for a fall-back transaction with
a cloned card
* => PIN entry devices should be tamper-proof and are certified (Visa, EMV, PCI, ...);
VISA certification requirement: tampering requires >10 hours or >$25,000 per device
* tamper resistance protects bank's keys, not the PIN - which is transferred unencrypted
* Dione Xtreme tamper-resistance:
* tamper switch that wipes keys when device is opened, but easy to drill through in other places
* CPU in epoxy, no tamper detection
* Ingenico:
* tamper switch, protected by a steel plate with a sensor mesh around it
* "buttons" that wipe the battery when the front panel is removed
* meshes that detect PCB corruption
* CPU wrapped in a sensor mesh
* contains place for expansion cards, which can be used to hide a device;
* screw holes for fastening the expansion cards are a hole in the mesh, which can be used to tap the communication line
* or double-swipe => copy the mag stripe, then watch the PIN
* or relay the communication from a "restaurant" to e.g. a diamond shop
* possible improvements:
* no copy of magnetic strip on chip
* encrypted PIN communication - even if magnetic strip copy is already prohibited
* customer considered liable for PIN fraud per voluntary banking code => incorrect incentives
* certification contracted for by the manufacturer => incorrect incentives
* [More about this research](http://www.cl.cam.ac.uk/research/security/banking/ped/)

### Hacking the iPhone

* "application processor", "baseband modem": separate security system for each
* application CPU:
* OS: lobotomized OS X (`launchd`, system libs); shell is "SpringBoard"
* `lockdownd` = bridge for connecting between computer and iPhone sockets
* `/` "read-only"; `/private/var` read-write: only a logical distinction by FS
* protection:
* 3rd party apps on user partition, signature necessary (verified by `execve()`), run as user `mobile` => cannot overwrite the system
* kernel signed, writable only by `root`
* boot process:
* boot ROM loads "LLB" from a NOR flash
* LLB loads image list, next stage, checks signature, runs iBoot
* iBoot: populates device tree, loads kernel, executes it; checks signatures on everything
* LLB is not signature checked
* boot abort:
* downloads root & kernel to ramdisk, uses it to reflash boot/kernel
* more signatures, hashes, encryption...
* when checking a recovery ramdisk signature, the boot ROM contains a buffer overflow => can be exploited to load any ramdisk => patch anything
* mistakes: gradual roll-out that allowed learning about the system (old versions unencrypted, trusting the PC, running as `root`...)
* baseband:
* boot: ROM => boot loader => firmware; "Nucleus" kernel
* no recovery mode => can be bricked
* boot loader: allows serial payload if ROM is "blank", there is a signature check
* ROM checks boot loader's signature
* 3 entry points: normal/service/trusted module
* bugs in v1:
* address computation bugs allowing overwriting "protected" areas
* RSA implementation vulnerable to Bleichenbacher attack => can fake signatures
* v2: protected => patch loaded firmware in RAM instead of modifying the flash

### Advanced memory forensics: The Cold Boot Attacks
* [Research homepage](http://citp.princeton.edu/memory/)
* RAM is not designed to erase on power off - there's only a "noticeable probability" of bit flips
* cold boot attacks:
* deliberately crash a PC, then restore power to RAM and dump it
* wake a sleeping/hibernated laptop (to a password prompt), then crash and restore power to RAM
* tools to dump after a reboot:
* USB stick or network boot - to dump memory data
* PXE is untrusted => PXE may be used to boot a dumper, which sends it to a broadcast address => the receiver of the dump is not necessarily identifiable
* detecting key schedules, correcting bit errors:
* look at 8-byte aligned bits in memory, try to guess if it is a key; this can detect AES keys, supposedly regardless of implementation
* RSA: even if only 70..75% bits are preserved, the rest can be recovered
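The key-detection pass described above can be approximated in code. The real tools expand every candidate window into a full AES key schedule and tolerate bit errors; the sketch below is only the cheap first filter, a hypothetical entropy heuristic over 8-byte-aligned windows (the threshold of 3.5 bits/byte is an assumption, not from the talk):

```python
import math

def byte_entropy(window: bytes) -> float:
    """Shannon entropy in bits per byte of a memory window."""
    counts = {}
    for b in window:
        counts[b] = counts.get(b, 0) + 1
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def candidate_key_offsets(image: bytes, size: int = 16, threshold: float = 3.5):
    """Scan a memory image at 8-byte alignment and flag high-entropy
    windows as possible key material.  The real attack is stronger: it
    expands each candidate into an AES key schedule and checks the
    schedule's internal consistency even in the presence of bit errors."""
    hits = []
    for off in range(0, len(image) - size + 1, 8):
        if byte_entropy(image[off:off + size]) >= threshold:
            hits.append(off)
    return hits
```

A mostly-zero image with one 16-byte high-entropy region yields exactly that region's offset.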
* cooling:
* only necessary when physically removing RAM from a PC (necessary if BIOS is "unfriendly" - e.g. clears RAM)
* simply restarting at room temperature never lost enough data to prevent key reconstruction
* even when moving RAM, cool canned air was enough, liquid nitrogen not necessary :)
* => bugs:
* BitLocker: basic mode writes key information to disk! => fully automated live CD
* BitLocker with TPM: does not use TPM for all encryption, key is still in memory
* countermeasures:
* turn laptops completely off when computer might get out of owner's control (when going through US customs ;) )
* require a password to boot external media (but chips can be simply moved to other computer)
* destroy all key material at screen lock, suspend, hibernate — this can require significant software changes
* ECC RAM: supposed to be cleared by a memory controller on boot, but that can be bypassed/reprogrammed to make the parity bits available => even more data available
* encrypted memory (for DRM..., storing keys in tamper-proof HW)
* film script writers clueless: removing DIMM modules with tweezers, cold boot attack a SIM card

### Why were we so vulnerable to the DNS vulnerability?
* flaw history:
* 1999: DJB: 16b is not enough => response: if the TTL is ~1day, there won't be enough opportunities to try guessing an ID
* there are many ways to get around the TTL defense
* if the attacker controls when the query happens, they can send the reply before the genuine one
* "in theory should not matter"
* practice:
* most web not encrypted
* e-mail
* other applications
* 41% of SSL certs self-signed
* most non-browser network clients don't even want a signed SSL certificate
* automatic updates assume DNS is safe
* SSL uses e-mail to authenticate certificate receivers
* "forgot my password" systems bypass SSL authentication entirely
* attack method:
* send an e-mail => DNS lookup for the receiver's domain, attacker knows domain IP and port used by the DNS server, poisons it
* "forgot my password" will send mail, poisoned => I'm an authenticated user => can insert PHP code => can own the box
* (connection to database host poisoned as well)
* not a bug in Drupal, ..., anywhere - only DNS
* => "need to stop using passwords and only use SSL client certificates"
* sucks, expensive to manage, fail in some use cases
* simply: passwords scale => they WILL be used
* analogically: DNS scales => it WILL be used
* DNS is a good at "federation" - managing a shared name space without conflicts and without trust between users/participants (competing companies)
* everything uses DNS to federate:
* e-mail's MX
* web's same origin policy
* SSL/x.509: supposedly distributed, federated, ...
* how do you know which root CA to trust?
* wildcard certificates difficult to acquire
* not actually independent of DNS: CN=dns.name
* password reset e-mails
* OpenID: uses same origin policy
* SSL: uses e-mail to check user owns a domain
* DNS is reasonably secure as such and on input, but not on output (replies are not authenticated)
* in practice: used for security ~everywhere because there's no alternative
* other problems in 2008:
* VPN software uses SSL, but does not validate the certs
* cookies received from `https://` are by default sent to `http://`, and almost nobody changes that
* almost nobody authenticates downloaded code (Java, OpenOffice, iTunes, ...)
* Debian RNGs
* SNMPv3 flaw (Wes Hardaker): challenge-response protocol to authenticate users only verifies the first byte of the response!
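The SNMPv3 item above reduces a keyed-MAC check to a 1-in-256 guess. A hypothetical sketch of that class of bug (a verifier that trusts the attacker-supplied MAC length, here taken to the extreme of a single byte) and a brute force against it:

```python
import hmac
import hashlib

def broken_verify(key: bytes, message: bytes, claimed_mac: bytes) -> bool:
    """Buggy verifier in the spirit of the SNMPv3 flaw: it compares only
    as many bytes as the attacker supplied."""
    real = hmac.new(key, message, hashlib.sha1).digest()
    return real[:len(claimed_mac)] == claimed_mac  # length is attacker-controlled!

def forge(message: bytes, oracle) -> bytes:
    """Without knowing the key, find a 1-byte 'MAC' the oracle accepts
    in at most 256 tries."""
    for b in range(256):
        if oracle(message, bytes([b])):
            return bytes([b])
    raise AssertionError("unreachable: some byte must match")
```

With a correct verifier (full-length constant-time comparison) the same brute force would need ~2^160 tries instead of 256.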
* unifying traits:
* authentication problems
* all simple bugs
* fixes are hard to manage (dependencies, other customers, ...; "I love buffer overflows" because the fix is always local)
* the bugs "blend":
* DNS + SNMPv3 = MitM by reconfiguring a router to route the data through the MitM
* bad DNS works around Java socket connection restrictions => can use Java applets to attack SNMPv3
* all are very old problems, age does NOT predict quality
* very slow and expensive to repair (DNS: nobody depends on the bug, 2 days to find, 8 months to fix; ~75% patched after a month)
* theory: if DNS was completely secure, some of the design bugs wouldn't matter
* VPN software: must be shippable as test gear even when customers don't have an SSL certificate - obtaining one would require coordination between customer, vendor and certificate authority, too expensive and complex; but if DNS were secure, it could store an authentication hash
* DNS cannot tell you that a site is HTTPS-only, so you must place a redirect on `http://` - which can be MitM attacked
* automatic upgrades: would Just Work, but SSL is slow and does not scale => people use HTTP and screw up the authentication of the update (certificates...); DNS could be used to distribute hashes of new code
* storing PGP in DNS => secure e-mail
* don't blame business guys for poor design caused by needing to work-around DNS
* "DNS was not designed for putting things in DNS"
* DNS is already used for security - without secure DNS
* federation: nobody can prohibit putting these things in DNS
* => to do:
* figure out how to make DNSSEC scale
* migrate applications to use it
* how the fix was done:
* identify critical players - Paul Vixie contacted them...
* met in person - to force making a decision: Paul brought in engineers with decision-making capability
* agreed to synchronize the release to avoid attacks on slow fixers
* ground rules:
* must secure "all names"
* must secure all authoritative name servers, even if they don't do anything => must be done in the recursive name server
* must not alter DNS semantics (break anything): if only 1% fails, the patch will be rolled back
* easiest fix: DJB: source port randomization: slow, port conflicts when there are other services, problems with firewalls, some kernels don't handle too many sockets open
* alternatives:
* TTL - but there are >=15 variants of TTL bypass, not comprehensive and won't be (e.g.: query for unknown record type, flood with NXDOMAIN => DNS server must drop all caches => TTL bypassed)
* use TCP: very bad performance
* deploy "defense" only when an attack is observed - but DNS requests can be spoofed => defense becomes a DoS
* resolve 2 or 3 times - breaks Akamai
* vary case in request, verify the reply is the same => extra bits - but a1.11111111 gives only 1 bit
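The case-varying idea (later known as 0x20 encoding) gains one bit of spoofing resistance per letter in the name, which is why `a1.11111111` yields only one bit. A minimal sketch:

```python
import random

def encode_0x20(name: str, rng: random.Random) -> str:
    """Randomize the case of each letter of a query name; digits and
    dots carry no case and therefore no entropy."""
    return "".join(
        c.upper() if c.isalpha() and rng.random() < 0.5 else c.lower()
        for c in name)

def extra_entropy_bits(name: str) -> int:
    """One extra bit of spoofing resistance per letter in the name."""
    return sum(1 for c in name if c.isalpha())

def reply_matches(query_name: str, reply_name: str) -> bool:
    """Accept a reply only if it echoes the exact case of the query."""
    return reply_name == query_name
```

A spoofer must now guess TXID, port, *and* the case pattern.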
* should there be a midpoint in the transition to DNSSEC? somebody is working on it
* release handling:
* talk to personal e-mail providers, the most expected attack targets
* notified SSL CAs because they verify ownership using DNS
* autoupdaters: underestimated how many were broken
* testing tool:
* testing the DNS server used for HTTP, and for SMTP
* letting clients test without cross-site scripting: for images loaded from a foreign domain, the image size is available => a side channel for cross-domain information retrieval.
* current state:
* ~75% name servers updated, probably won't get much better
* don't know how many users are protected by this
* other bugs in trusting a gateway, there will be more?
* thoughts on DNSSEC:
* to be meaningful, root MUST be signed
* how to make key signing scale? DNSSEC requires a lot of manual manipulation
* registries and registrars must be able to do this without too much cost / "load"
* DNSCurve (DJB)
* requires on-line key signing (key available on DNS server), unlike DNSSEC
* registrars don't have to do much
* no code
* new crypto: the proposed ECC is optimized for speed and non-standard
* not proven, not specified
* patent-encumbered
* per-query crypto (not required by DNSSEC) => 100% CPU at 1/3 load of current systems
* does not provide end-to-end trust: secure only if directly talking to authority servers, not cachable => 100x load

### Lightning talks (Tuesday)
* lxde.org
* anomos.info ~ encrypted, pseudonymous BitTorrent (assuming a trusted tracker)
* privacyfoundation.de:
* german privacy foundation: caused by investigations against Tor admins,
* "Tor partnership program": foundation legally owns and runs the server (needs root password)
* training of police investigators
* openpgp card, with smart card reader required => "crypto stick" on USB; will release HW/SW as open source;
pre-order info@privacyfoundation.de, €30..40
* OLSR-ng: mesh routing daemon
* hacking botnets and stealing back stolen data
* "fresh" USB stick contains a bot
* ~0.95M infected IPs, ~145k users online at one time
* hackable1.org: hackable:1: OS for OpenMoko


### Climate Change - State of the Science (Stefan Rahmstorf)
* simple energy balance (long-term): solar radiation - reflection of solar radiation = back radiation (= earth's radiation out) => change incoming radiation, change reflectivity, or change back radiation
* orbital cycles:
* CO2 cyclically between 190..290 ppm in the past 400k years
* currently we're at >350ppm
* orbit changes => glacial cycles
* CO2 concentration rise caused by humans: we know how much we have emitted, the rise is only ~50% of human emissions - the rest is mostly in the ocean, which is becoming more acidic
* effects other than global mean temperature:
* more frequent heat waves
* precipitation changes (S Europe drying out)
* sea level higher by ~.5..1m
* cyclone strength possibly increasing
* what we want to do:
* target 2° increase - still supposed to be practical - by reducing emissions to ~50% by 2050
* a proposed plan for Europe, supposed to cost no more than now

### Attacking Rich Internet Applications
* DOM-based XSS:
* use values of DOM objects that can be influenced by the attacker, output them somewhere or `eval()` => XSS
* `document.URL`, `location`, `referrer`...
* "sinks": `javascript:` URIs, ...
* new "sinks": CSS 3 selectors can read data from the page, will be able to read data from other pages in HTML5; `` prefilling, style changes
* `document[user_input]` can access `document.cookie` (=> cookie stealing, session fixation); `[]` notation common in "packed" javascript
* IE8 XSS filter: stops injections into JavaScript strings, but assignments are still allowed
* injection into "nice" URLs: /path/to/my/Nice_name?... where $Nice_name = ../../../something
* `document.domain` controlling same-origin policy
* client-side SQL injection on client SQL storage
* HTML injection inputs: facebook/myspace/IM/webmail, ...:
* `document.getElementById()` uses `name=` as well in IE; returns the first matching object if >1 is there
* `document.getElementsBy`{`Tag`,`ClassName`} in IE[67] accepts `id=`, `name=`
* control flow problems: unexpected condition evaluation ("is undefined" vs. "is 0", etc.)
* concurrency bugs: 1 thread per page, no support for locking; usually no shared state, but that might change
* browser-based DOM XSS:
* examining what cross-domain iframe can access in parent
* Fx 2.0:
* `frame.history.go()` overridable
* prohibited attempt to set frame.variable deletes it in the frame => can affect frame's javascript variables/control flow
* `window.`{`top`,`opener`,`parent`,`frames`} overwritable
* IE7:
* frame`.opener` can be overwritten, used e.g. by tinymce
* WebKit/Safari/Air:
* `__defineGetter__` on `history.`$something
* Opera: `window.top`
* RIA to subvert html5: "too much accessibility"
* ``, ``
* stealing it: set `window.onkeydown`, on `enterKey` read value
* force user to press "down" and Enter to get a value from user's history: JavaScript game using arrow keys and Enter
* CSS3: [attr^=val], [attr$=val], [attr*=val] matches parts of an attribute value, can be used to read attributes if we can control CSS
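The CSS3 selector trick recovers a secret attribute prefix-by-prefix: an injected rule like `[attr^=...] { background: url(//evil/...) }` leaks each matching prefix to an attacker server. A Python simulation of the `^=` matching semantics (the alphabet is an assumption - the attacker must guess the character set):

```python
def matches_prefix(attr_value: str, test: str) -> bool:
    """Semantics of the CSS3 attribute selector [attr^=test]."""
    return attr_value.startswith(test)

def extract(attr_value: str, alphabet: str) -> str:
    """Recover the attribute character by character, the way injected
    [attr^=...] rules leak each matching prefix via a styled request."""
    known = ""
    while True:
        for c in alphabet:
            if matches_prefix(attr_value, known + c):
                known += c
                break
        else:
            return known  # no character extends the prefix: done
```

Each recovered character costs at most one injected rule per alphabet symbol.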
* Google Gears:
* user-controllable cache => allows cache poisoning, affects anything that caches the same data as soon as 1 page is XSSed
* e.g. `google-analytics.com` affects "most of the web"
* can run JS "threads" that load data from other domains => hosting user's plain text, or even correctly marked XML lets them run in your domain's context
* attacking Fx extensions: common vulnerabilities: `eval()` for JSON => network control implies code execution
* `opera:` scheme: all `opera:`* URLs have the same "origin"

### Vulnerability discovery in encrypted closed source PHP applications
* PHP bytecode: opcode, result, 2 operands, "extension", source code line #
* most bytecode encryptors don't remove the line #
* newer encryptors don't decrypt & pass to original executor, they contain a copy of the executor instead - but it still has the same structure & same dispatch tables
* find opcode table, patch it to record the opcodes, then reproduce the byte code
* encrypt all op codes => have recorded versions; guess "optimized op codes" defined in the encryption
* checking byte code is sometimes simpler than grep & inspect, more can be automated (including data flow analysis to find potentially dangerous constructs)

### TCP Denial of Service Vulnerabilities
* (originally wanted to find out what Outpost24 found)
* original design: end-to-end intelligence, attacks within the network unlikely
* Paul Watson: TCP reset attacks: reset needs to guess a seq# within a window, which is quite easy with large windows
* countermeasure: random seq# and source port
* connection backlog: to limit kernel memory usage
* 2 separate queues: SYN, ESTABLISHED (waiting for `accept()`)
* connection flooding (not SYN flooding) depends on application behavior => applications must:
* `accept()` fast enough
* limit the total number of connections - the kernel doesn't! - ideally based on actual (kernel) memory consumption
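A minimal sketch of the application-side guard the two bullets above call for: track live connections and refuse beyond a cap. The fixed cap is an illustrative assumption; as noted, a real guard would budget by actual kernel memory consumption:

```python
class ConnectionGuard:
    """Track live application connections and refuse beyond a fixed cap,
    since the kernel does not limit ESTABLISHED connections for us."""

    def __init__(self, max_connections: int):
        self.max_connections = max_connections
        self.active = 0

    def try_accept(self) -> bool:
        """Call right after accept(); on False, close the socket at once."""
        if self.active >= self.max_connections:
            return False
        self.active += 1
        return True

    def release(self):
        """Call when a connection is closed."""
        assert self.active > 0
        self.active -= 1
```

The point is that the limit lives in the application: the kernel will happily keep accumulating established connections until memory runs out.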
* FIN_WAIT1: in kernel, can't be controlled from user space, timeout is >= 7 min
* timeout depends on round-trip time, which can be artificially enlarged => up to 16 min!
* peer can make sure the kernel's send queue is very large => very large kernel memory consumption
* if window size is 0, window probes are repeatedly sent, connection is never timed out!
* kernel: TCP_DEFER_ACCEPT in Linux: `accept()` only after data is available => if no data ever sent, stays in kernel queue
* congestion control: can trick a high-bandwidth host into exhausting its bandwidth:
* send ACKs to prevent congestion detection
* ... even before receiving the packet to fake small RTT
* fixes:
* change TCP to prove the packets were received
* drop a segment from time to time to check if peer is truthful
* new possibilities:
* open a connection, reach optimal connection window; set window to 0 (this does not change the congestion window!)
* repeat N times
* open all windows at once
* a burst can indicate full queue, but not really congestion => "shrew DoS": can DoS by periodic bursts

### Banking Malware 101
* classical attack: only one credential per victim
* banking trojan: "somehow" get a keylogger to victims, collect many credentials per victim
* Nethell/Limbo trojan:
* IE "Browser helper object" using COM
* stores typed passwords, passwords stored "in the browser", cookies
* handles visual keyboards by sending screenshots around mouse click locations
* ZeuS/Wsnpoem/Zbot
* injects itself into various system processes (services.exe, ...)
* grabs form field contents, injects arbitrary HTML code (e.g. to store some fields in forms, to ask for more personal information)
* detection: uses specific mutex names
* others:
* MBR modification
* finding drop zones:
* honeypots, client-side honeypots: deliberately browsing the web to try to encounter malware
* malware analysis: cwsandbox: execute in controlled environment, observe
* some keyloggers only submit data after interesting data is collected or after IE is started => emulate human behavior
* statistics:
* gigabytes of collected data, 1.5GB the largest one
* drop zone lifetime ~2 months
* German targets: Volks- und Raiffeisenbanken, OSMP, Santander, Fiducia, ABN Amro Bank
* underground prices: bank account: $10..$1k; credit cards: $0.4..$20; full identities: $1..$15, ...
* protecting oneself:
* patch quickly
* do not click on suspicious links, open suspicious attachments
* anti-virus
* Germany: mobile TAN (SMS with transaction description and TAN)
* indexed TANs vulnerable to MitM - seen in the wild
* in general, 2 factor auth helps
* [honeyblog.org](http://honeyblog.org/)

### Tricks that make you smile
* SQL injection - reading data by using an expression that returns an ID:
`select * ... where id = `(injectable):
collect a set of valid IDs, then inject an expression that maps the secret bits onto the known IDs - `(i_want_to_know & mask)` selecting among `{1: id_1, 2: id_2, 3: id_3}`; which record comes back reveals the bits
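The ID-mapping trick can be simulated with sqlite3. The schema and column names below are hypothetical; the injected subquery maps two secret bits onto the four known row IDs, so observing which record is returned leaks two bits per query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO items VALUES (1,'a'),(2,'b'),(3,'c'),(4,'d');
    CREATE TABLE users (id INTEGER PRIMARY KEY, pin INTEGER);
    INSERT INTO users VALUES (1, 13);  -- 13 = 0b1101, the "secret"
""")

def vulnerable_lookup(injectable: str):
    # the application naively concatenates: SELECT ... WHERE id = <input>
    return conn.execute("SELECT id FROM items WHERE id = " + injectable).fetchone()

def leak_two_bits(shift: int) -> int:
    # map two secret bits onto the known IDs 1..4, observe which row returns
    expr = f"(SELECT ((pin >> {shift}) & 3) + 1 FROM users WHERE id = 1)"
    return vulnerable_lookup(expr)[0] - 1

secret = leak_two_bits(0) | (leak_two_bits(2) << 2)
```

Two "legitimate-looking" queries reconstruct the whole 4-bit secret.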
* TIOCSTI: when root runs an untrusted binary via sudo as a local user, the binary can use the TIOCSTI ioctl to push characters into the terminal input queue, injecting commands that run as root
* copy&paste of commands from HTML: visible commands may contain "invisible" text pasted into plain-text; this is browser-dependent
* lazy DNS admin: if `local.`domain is 127.0.0.1, the same-origin policy can be circumvented
* interesting targets for local file inclusion attacks:
* `/var/log/`*, `/tmp/sme_session_file`, `/proc/<pid>/fd/<n>` (used if you don't know the file path), `/proc/<pid>/environ` (affected by CGI headers), both especially with `/proc/self`
* some of that can have controlled content, which turns PHP inclusion into PHP execution

### Running your own GSM network
* network authenticates a mobile device, but the device does not authenticate a network
* don't try this: GSM in licensed spectrum :)
* network architecture:
* intelligence in the network, not end nodes
* GSM: TDMA => data sender/destination identified only by time slot
* MS = mobile station (phone)
* BTS = base transceiver station - only a transceiver, not "intelligent"; L1, parts of L2 - frame scheduling, ...; slave to BSC
* BSC = base station controller: most of decision making, controls BTSs, handles call handover between BSCs
* MSC = mobile switching center: call switching, "interworking" with ISDN/POTS, inter-BSC call handover
* HLR/VLR = home/visitor location register
* BSC<->BTS interface: A-bis: control E1 line, encoded voice data on other E1 lines (E1 = European ISDN primary rate interface: 32 TDM multiplexed channels of 64 kbit/s)
* speech/data: 64kb/s split into 4 16kb/s subchannels
* speech: "full rate" 13kb/s (260b/20ms), "half rate", "enhanced speech codec", ...
* "full rate" packeted into 16kb/s (320b/20ms) stream of "TRAU packets"
* radio link layer: call control, mobility mgmt (roaming, handover, ...), radio resource mgmt, SMS
* Siemens BS-11 microBTS: 2G; documentation under NDA, but 99.9% of A-bis is standard
* IMSI/IMEI skimming:
* phone will pick strongest network - not home network!
* can ask it for IMSI/IMEI, then reject them => they connect to their home network
* => can observe statistics of owner country, phone manufacturer
* "Egypt detection": GPS is illegal in Egypt
* => if a phone sees an Egypt BTS, it disables GPS
* => if a BTS identifies as an Egyptian provider, it disables GPS on phones!
* MITM attack: can get an incoming call - but where do you route it?
* for a "real" MITM you need very good control of a "mobile phone" component
* routing through e.g. ISDN fairly easy - but easier to trace back


### eVoting after Nedap and Digital Pen
* should be free, fair, general => need to be verifiable, transparent, secret:
* secret => free, no vote selling
* auditable => fair and honest
* transparent => does not rely on authorities to ensure fairness, adds legitimacy
* transparency supposed to be mandatory (OSCE), approaches:
* Germany: anybody can observe election and counting (not necessarily a citizen, ...) - only restricted for safety and public order
* Austria: parties can nominate 2 election witnesses per polling station
* UK: parties can nominate witnesses, others can register for observation
* paper voting "white box" - does not do any processing;
* e-voting: processing not observable, input is secret => output unauditable
* reasons for e-voting:
* cheaper
* "already spent the money on equipment"
* saves 1 hour of counting
* more complex systems (cumulative voting, preferential, ...) practical to implement
* fix suggestion: physical copies (paper trail, digital pen ...):
* what triggers recount, who decides what to audit, who has the copies? => fixes auditability, not transparency
* to ensure correctness, # of audited stations may be very large when results are close (Hesse 2008: difference of ~1 vote / station)
* what if they don't match? which one is binding?
* TEMPEST-proof printers impractical => problems with secrecy
* printer reliability, vendors object
* => suggest cryptography:
* voter can verify his vote was correctly counted, but can't prove his selection to others
* all ballots have unique ID, all encrypted votes are published, voter can verify his vote is on list
* does not protect against ballot stuffing
* does not allow external observers
* how many voters need to cooperate to prove fraud?
* if I know somebody won't check (e.g. by adding a trash and collecting receipts), can I change the vote?
* can the system be decrypted? by whom?
* what if vote ID actually identifies something about voter?
* what if multiple voters receive the same receipt? then there are free IDs for vote stuffing
* ThreeBallot:
* 3 columns per candidate, mark chosen twice, the others once
* machine verifies the ballot is valid
* get copy of 1 column chosen by the voter => each voter can verify 1/3 of their vote is correctly counted, we assume the counter does not know which 1/3 will be verified
* not coercion free: vote buyer can ask for a specific pattern, then verify the pattern was published
* serial #
* checker/copy mechanism is trusted
* user-unfriendly
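The ThreeBallot validity check the machine performs, and the counting rule, can be sketched directly from the description above (chosen candidate marked in exactly 2 of the 3 columns, every other candidate in exactly 1):

```python
def valid_threeballot(marks: dict, chosen: str) -> bool:
    """marks[candidate] = number of the 3 columns marked for that row.
    A valid filled-in ThreeBallot marks the chosen candidate twice and
    every other candidate exactly once."""
    return chosen in marks and all(
        m == (2 if cand == chosen else 1) for cand, m in marks.items())

def tally(ballots: list) -> dict:
    """Each candidate's vote count = total marks minus the number of
    ballots, because every ballot contributes one background mark per
    candidate regardless of the voter's choice."""
    counts = {}
    for b in ballots:
        for cand, m in b.items():
            counts[cand] = counts.get(cand, 0) + m
    return {cand: c - len(ballots) for cand, c in counts.items()}
```

This also shows why each of the three separated columns reveals nothing on its own: a single column of a valid ballot is statistically independent of the choice.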
* randomized partial checking:
* permute votes/"connection"/result:
* for each connection, publish one side of the connection => 50% chance of uncovering a vote flip for each vote => 2**(-n) probability of vote flipping
* [commitment protocol: can commit to a secret, publish a "verification", then can prove the secret]
* punchscan:
* individual vote sheets: random # for each candidate on top sheet + holes, random number in different order on bottom sheet => mark both top and bottom sheet, take 1 home, vote with the other - connected by serial #
* neither sheet allows proving a vote
* does not prevent coercion: with 2 candidates, can require voting for 1 with only 33% probability of successful opposite vote
* Scantegrity I, II
* bingo voting:
* prepare a random number for each (voter,candidate), commit to the selection
* voter selects candidate, trusted RNG generates a random #
* receipt with fresh random # for chosen candidate, # from committed list for others => can count unused random #s
* publish results, all receipts, unused dummy votes, randomized proof that .. [something]
* votes can be stolen if a RNG is controlled: can return two identical receipts for two votes with same candidate, then vote in any way instead => have to trust the RNG
* commitments must be shared, can observe where they were not downloaded...
* risk of insecure implementation
* if later discovered insecure, can't un-publish records => secrecy risk
* can't verify a system runs the code, ...
* voting authority can publish manipulated data => the election will be considered tampered with
* too many different candidates in practice, implementation difficulties: mark 1836 rows once, 93 twice :), similar for other cases... => unusable exactly where e-voting is useful
* who actually understands the math and can challenge the system or evaluate such a challenge?

### An introduction to new stream cipher designs
* stream cipher = something that generates a keystream (a sequence of random-looking data), which is used to XOR the cleartext
* input = key, IV
* security assumptions: attacker can choose IVs, knows algorithm
* main attacks:
* distinguishing: distinguishing keystream from true random source
* recover internal state of the cipher - or even the secret key
* RC4
* very fast, simple
* output can be distinguished from random,
* hard to use correctly,
* large state, not suitable for HW
* unspecified IV handling
* "don't use, there are better options"
* AES-CTR: "AES in counter mode":
* standard, reasonable resource requirements
* everyone tries to attack it
* rest:
* lots of bad proprietary ciphers:
* A5/[12] in GSM: broken
* E0 in bluetooth: broken
* MIFARE Classic, KeeLoq: broken
* EU NESSIE: all broken
* eSTREAM: EU-funded research, to create something demonstrably superior to AES at least in 1 aspect
* 34 submitted ciphers, about half broken
* profile 1: for SW implementation: key >=128b, IV 64 or 128b
* HC-128: resembles RC4, reuses SHA-256, fast (2-4 cycles/byte, AES has 15-30), very slow for small inputs
* Rabbit: had patent issues, now public domain; RFC 4503, fast on long streams
* Salsa20/12: by djb
* SOSEMANUK: reuses SNOW 2 and SERPENT
* profile 2: for HW implementation in small embedded devices
* Grain v1: compact, can be unrolled, but "tight security margin"
* MICKEY 2.0: slow, large
* Trivium: fast, large state, simple
* (F-FCSR-H: broken)
* NIST SHA-3 competition:
* 17/51 in the first round broken...

### Analyzing RFID Security
* challenge-response authentication:
* card->reader: ID, Random_c
* reader->card: Encrypt(Random_c), Random_r # authenticates reader
* card->reader: Encrypt(Random_c, Random_r) # authenticates card
* proxy/relay attack:
* MITM: emulate a reader for a card, and a card for a reader
* by measuring travel time one can limit attack distance to ~30m, but that's expensive and not implemented
* "always possible"
* emulation: spoof "unique" data such as card ID
* some "authentication" systems only use card ID, do not do any card auth!
* replay: if we can force a known challenge
* generating random numbers in RFID is rather difficult - no state, no memory
* Mifare: completely predictable random numbers (known LFSR)
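The predictability is easy to see in code: a 16-bit LFSR restarted from a fixed power-up state always produces the same "random" challenges, so an attacker who controls timing knows the nonce in advance. A sketch of a Fibonacci LFSR (the taps follow the polynomial x^16 + x^14 + x^13 + x^11 + 1 reported for Mifare Classic - treat the details as illustrative):

```python
class LFSR16:
    """16-bit Fibonacci LFSR; deterministic given its state, which is
    exactly the Mifare problem - no entropy source, no memory."""
    TAPS = (16, 14, 13, 11)

    def __init__(self, state: int):
        assert 0 < state < 1 << 16
        self.state = state

    def next_bit(self) -> int:
        fb = 0
        for t in self.TAPS:
            fb ^= (self.state >> (t - 1)) & 1
        self.state = ((self.state << 1) | fb) & 0xFFFF
        return fb

    def next_nonce(self) -> int:
        """A 32-bit 'random' nonce clocked out of the 16-bit register."""
        return sum(self.next_bit() << i for i in range(32))
```

Two instances with the same power-up state produce identical nonce sequences forever, which is what makes replay with a forced known challenge practical.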
* crypto attacks:
* brute force, try many keys at once, the same with SAT solvers
* Mifare: FPGA cluster brute forces key in 50 minutes!
* rainbow tables: trade off space vs. time: 48 bit (mifare) is very doable; anything below 64 bits possible
* algebraic attack: describe weak parts as equations, brute-force complex parts, solve the result using a SAT solver (MiniSAT) to get the key [e.g. guess a few bits of the key - most guesses will be unsatisfiable, then we can continue or just solve the equations]
* market overview - insufficient security: Mifare Classic, Hitag2, some Legic, some HID, Atmel CryptoRF, Atmel CryptoMemory
* proposed mitigation for Mifare Classic:
* signing: strongly authenticate data
* radio fingerprinting to detect emulation
* #1 enemy of RFID: vandalism (supposedly by people forced to use it?)
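The three-message challenge-response handshake at the top of this section can be sketched as follows; `enc` is a placeholder keyed function standing in for the card's actual cipher, and the ID, key, and nonce values are made up:

```python
import hashlib
import os

def enc(key: bytes, *parts: bytes) -> bytes:
    """Placeholder for the card's cipher (a keyed hash stands in here)."""
    h = hashlib.sha256(key)
    for p in parts:
        h.update(p)
    return h.digest()[:8]

KEY = b"shared-sector-key"  # hypothetical key shared by card and reader

# card -> reader: ID, Random_c
card_id, rand_c = b"\x04\x12\x34\x56", os.urandom(4)

# reader -> card: Encrypt(Random_c), Random_r   (authenticates the reader)
reader_resp, rand_r = enc(KEY, rand_c), os.urandom(4)
assert reader_resp == enc(KEY, rand_c)        # card verifies the reader

# card -> reader: Encrypt(Random_c, Random_r)  (authenticates the card)
card_resp = enc(KEY, rand_c, rand_r)
assert card_resp == enc(KEY, rand_c, rand_r)  # reader verifies the card
```

If `Random_c` is predictable (as with Mifare's known LFSR), an attacker who has seen one exchange can force the same challenge again and replay the recorded responses.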

### DECT
* cordless phones, wireless ISDN access, baby phones, remotely controlled door openers, traffic lights(!); ~30 million base stations in Germany
* FP = fixed part, PP = portable part; RFPI = radio FP identity, IPUI = international portable users identity; DSC = DECT standard cipher, DSAA = DECT standard authentication algorithm; UAK = user authentication key (shared between handset and base station)
* DECT: digital, 10 channels in EU, 5 in US, 24 time slots per channel (12 up, 12 down)
* scrambling: "data" XORed with an LFSR output; the LFSR is public
* encryption: both control and data XORed with a DSC stream
* FP is broadcasting network info, scanning for PP activity
* PP: list of carriers, select best carrier/slot, open connection,...
* sniffing: stations not synchronized, no packet source/destination, unknown which channel/slot the PP uses to open a connection, frame # must be known for descrambling => ~2 GHz CPU required, €1000
* PCMCIA card ComOnAir: €23, low CPU requirements
* DECT security:
* phone authentication based on UAK, challenge, FP's challenge (to handle roaming)
* network authentication based on UAK, challenge, ...
* hardwired in phone (not chip card)
* encryption uses phone authentication data + IV
* all algorithms in DSAA hardwired, secret
* sometimes no authentication, no encryption at all
* sometimes network does not authenticate to phone
* sometimes authentication OK, but no encryption
* passive voice sniffing when no encryption is used: €23 total cost
* voice sniffing when encryption is used:
* impersonate a base station - most phones won't abort connection if the base station does not abort!
* need to know the base station's and phone's ID;
* DSAA algorithm: used for authentication, key generation, UAK generation
* secret, reverse engineered: A12, A21, A22 are simple wrappers around A11
* A11: 4 different block cipher invocations
* "cassable" block cipher:
* 6 rounds, last does not use key => 5 effective rounds
* differential attack: with 16 chosen input-output pairs, 2^37 invocations (and other attacks)
* DSC: secret - but parts patented => public ... and mostly reverse-engineered
* UAK: 4-digit PIN, depends on RNG, ...; some implementations use very low-entropy RNG => impersonate handsets, decrypt calls, ...
* few DECT stacks actually used, equipment manufacturer's code can be used to identify
* pre-paired phones: can the UAK be read from EEPROM? most have test contacts in the battery case...
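The scrambling step described above (data XORed with the output of a public LFSR) can be sketched like this; the register width and tap positions are illustrative, not DECT's actual polynomial:

```python
def lfsr_stream(state: int, taps: tuple, nbits: int, length: int) -> list:
    """Fibonacci LFSR keystream (taps/width are illustrative, not DECT's real LFSR)."""
    out = []
    for _ in range(length):
        out.append(state & 1)             # emit the low bit
        fb = 0
        for t in taps:                    # feedback = XOR of the tap bits
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return out

def scramble(bits: list, state: int) -> list:
    ks = lfsr_stream(state, taps=(0, 2, 3, 5), nbits=16, length=len(bits))
    return [b ^ k for b, k in zip(bits, ks)]

data = [1, 0, 1, 1, 0, 0, 1, 0]
state = 0xACE1  # in DECT the scrambler state is derived from the frame number
# XOR with the same keystream is its own inverse, so descrambling only needs the frame #
assert scramble(scramble(data, state), state) == data
```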

### Attacking NFC mobile phones
* RFID, modes:
* reader/writer; most often used
* card emulation
* peer-to-peer (two NFC devices)
* range: ~4 cm, data transfer up to 424 kb/s
* usage: touch tag with phone=> launch web browser, initiate voice call, send SMS, store vCard/vCal/note, set alarm, ..., launch custom application
* no link security (unencrypted wireless)
* NFC data exchange format:
* new subformat: SmartPoster = uri with title and optional icon, recommended action (execute, save for later, open for editing), ...
* Nokia xxx:
* reader always active unless phone in standby
* NFC tag handled by running application, or by phone
* application can register for specific data types (if not reserved type)
* NFC phones are typically not smart phones => limited SW, attacks mostly based on social engineering
* Mifare classic tag: 720 or 3408 bytes of payload
* per-sector configurable R/W mode, controlled by 48-bit keys
* attempts to write to sectors, brute-force keys: 10 keys/s
* Nokia: browser fetches the URL - ignores "action"
* URI spoofing: the GUI mixes informational text and control data => can trick the user into opening unintended URLs by storing a fake URL in the informational text, then padding it to hide the original URL.
* the URL is not displayed in the browser => easy to run a proxy that captures account info; when only part is displayed, a user@host URL (...@...) can fool the user (it produces a broken HTTP request, but that can be worked around; '/' cannot appear in the username part, so '\' is used instead)
* same for spoofing 0900 phone #
* sending SMS same, but the SMS URL and text is shown => can be noticed
* actually documented in the spec!
* most fixed in recent phones
* application can register for an URI => can intercept all tag read events for URI tags
* use writable tags to install MIDlets (JARs): no security warning on install, only on execution
* fuzzing:
* need manual moving the tag between writer and reader...
* phone switches off after 4 crashes in a row
* "survey":
* most services don't require an extra application, all use Mifare Classic 1k
* Wiener Linien: send an SMS to buy a ticket; the tag is read-only, but one can stick a tag over it that sends the SMS to another #
* Selecta vending machine: pay by sending an SMS: can take a tag from one machine, copy it and put it on other machines, then wait for a snack paid for by other people ;)
* ÖBB Handy-Ticket (trains): the station is encoded in the URL => by replacing a tag one can track users
* RMV Handy (Frankfurt): requires an application install, NFC is non-essential; uses a custom record for station ID and name with a public signature, but the signature only protects the data => can copy the tag and put it elsewhere; the timetable URI is only visible if the application is not installed
* ConTag tags are not "truly" read only
* can overwrite data if key B is broken
* unused sectors left unlocked => can change keys for these sectors, then store my data
* tag attacks: stick an attacker's tag over the original tag (shield the original off, or "fry" it); tag costs €1.2 in small quantities
* when storing a new value to a tag via a phone, old data is not cleared?
* DoS / discredit the system: write "problematic" content (phone crashers) on tags, stick them around
* Nokia Bluetooth imaging: send selected picture from phone to a Bluetooth device specified in the tag
* activates Bluetooth if disabled!
* simple MitM by replacing a tag with my bluetooth MAC, then forwarding the data to original
* Java/Nokia allow quite low-level access, even for talking to smartcards
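The user@host URI-spoofing trick described above can be demonstrated with Python's URL parser; the host names are hypothetical examples:

```python
from urllib.parse import urlsplit

# A SmartPoster could carry a URI like this: everything before the '@' is
# userinfo, so the request actually goes to the attacker's host. '/' is not
# allowed in the userinfo part, which is why '\' is used instead.
uri = "http://bank.example\\login@evil.example/collect"
parts = urlsplit(uri)

assert parts.hostname == "evil.example"          # where the request really goes
assert parts.username == "bank.example\\login"   # what a truncated display shows
```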

### Why technology sucks
* "technology cannot solve problems created by politicians/accountants/..."
* "technology does not create problems, only make them more urgent"
* "ov-chipkaart": public transport in Netherlands
* paper tickets: revenue distribution does not reflect usage, nobody knows how many tickets are left unused, people afraid on the train (fare dodgers?)
* => chip card, works as an e-ticket: check in on entry, check out when leaving
* can observe each user traveling; supposedly have to keep records for tax reasons for 7 years => available to prosecutors
* logging of internet accesses...
* "smallmail": "e-mail replacement", connecting using TLS over Tor
* anonymous servers, accounts - using Tor
* suggestion: use more mailboxes (1 for each "group") on different servers to make correlation attacks more difficult
* smallsister.org

### Predictable RNG in the vulnerable Debian OpenSSL package
* predictable for 2 years
* can brute force:
* challenge-response auth (server depends on client's RNG security!):
* countermeasures: detect weak keys, remove them; Debian's ssh can reject them automatically
* MitM: if the server's cert is weak
* survey: ~3% of CA-signed certs are weak
* most browsers don't check for cert revocation (only IE on Vista)
* DH
* attacking Apache: each connection has a different state => need to capture _all_ connections in order to recover state
* DSA: depends on RNG security of _all_ signature operations
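The Debian flaw left the process ID as essentially the only input to OpenSSL's entropy pool, so only about 32767 distinct keys existed per architecture and key size. A toy model (the `weak_key` derivation below is made up, not OpenSSL's real key generation) shows why brute force is trivial:

```python
import hashlib

def weak_key(pid: int) -> bytes:
    """Model of the flaw: the key depends only on the process ID (1..32767)."""
    return hashlib.sha256(pid.to_bytes(2, "big")).digest()

# An attacker precomputes every possible key once...
candidates = {weak_key(pid): pid for pid in range(1, 32768)}

# ...and can then instantly recognize any observed weak key.
observed = weak_key(12345)
assert candidates[observed] == 12345
assert len(candidates) == 32767
```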

7 comments:

  1. Hi! I've just published my slides and all the other materials related to the "Tricks: makes you smile" talk:
    http://www.ush.it/2009/01/06/25c3-ccc-congress-2008-tricks-makes-you-smile/

    Bye and thanks for the review (-;
    ascii
