Monday, November 14, 2011

Do Not Believe in Structured Logging

This is based on my first-hand involvement in the audit attempt at structured
logging, and on the results of documenting the audit records that were actually
sent by various programs. It is rather long, I'm afraid—but please read at
least the "issues" section; these lessons were learned the hard way, and any
plan that ignores them will fail.

Outline



  • Issues:

    • Any generic log parsing tool WILL need to handle unstructured text. Always.
    • Any structured logging format WILL be used inconsistently.
    • There WILL NOT be a universal field name namespace.

  • Implications:

    • You can only "trust" data that always comes from the same piece of code.
    • Most log analysis code must specifically know what record it is working with.
    • Structured logging formats cannot deliver what they promise.

  • A potential recommendation:

    • Do not define new structured logging formats.
    • Define a universal field syntax. Make it trivial to use.
    • Treat unstructured text as the primary format, and optimize its use.
    • ... but is it really worth it?
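The "universal field syntax" idea can be illustrated with a minimal Python sketch: unstructured text stays the primary format, and producers may optionally embed key=value tokens that any consumer can extract opportunistically. The syntax, regex, and field names here are invented for illustration, not an existing standard.

```python
import re

# Hypothetical "universal field syntax": the record stays free-form
# text, but producers may embed key=value tokens anywhere in it.
FIELD_RE = re.compile(r'\b([A-Za-z_][A-Za-z0-9_]*)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_record(line):
    """Extract any key=value fields; the raw text remains authoritative."""
    fields = {}
    for key, value in FIELD_RE.findall(line):
        if value.startswith('"'):
            value = value[1:-1].replace('\\"', '"')  # unquote
        fields[key] = value
    return {"message": line, "fields": fields}

rec = parse_record('login failed user=alice src=10.0.0.5 reason="bad password"')
print(rec["fields"]["user"])    # alice
print(rec["fields"]["reason"])  # bad password
```

Note that such a parser must still cope with records carrying no fields at all, which is exactly the first "issue" above.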


Tuesday, January 25, 2011

Notes from 27th Chaos Communication Congress - day 4

Here are some notes from the final day of the 27th Chaos Communication Congress. See also
day 1, day 2, day 3.

PDF


Can have a PDF that displays different output depending on OS/Locale - even
without JavaScript. Can do a lot from JavaScript, Postscript if "signed by
trusted cert". PDF is a container - can contain Flash that is auto-started.

PDF streams: ambiguous syntax for determining input size - can overlap other
data in the file. Document metadata is readable and writable in
JavaScript. Lots of metadata that can be used for storing arbitrary data.
Redefinitions of a PDF object - last one wins, even ignores the xref table.

%PDF header can be anywhere in first 1024 bytes => can make a file that is
both valid PDF and {ZIP, EXE, GIF, HTML}.
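That tolerance is easy to model. A small sketch (the 1024-byte window matches the behavior described in the talk; real readers vary):

```python
def looks_like_pdf(data: bytes) -> bool:
    # Many readers accept the %PDF header anywhere in the first 1024
    # bytes, which is what makes the polyglots above possible.
    return b"%PDF-" in data[:1024]

# A file that starts as a GIF can still be accepted as a PDF:
polyglot = b"GIF89a" + b"\x00" * 100 + b"%PDF-1.4\n"
print(looks_like_pdf(polyglot))  # True
```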

Antivirus evasion: Not tested - most AVs didn't find even a very common exploit; if
they did, then only using a generic signature. Various format confusion
methods confuse some AVs.

Lightning talks



  • TV-B-Gone for N900 - N900 has an IR xmitter
  • Monitoring spyware - CERT-Polska monitoring ZeUS (not a virus,
    needs to be dropped by something else).
  • SystemTap: "a code injection framework" - need only 3 lines to sniff IM from libpurple
    http://stapbofh.krunch.be/
  • "Hacking governments" - Talk proof of concept: graphing relationship structure of relevant people?
  • UAVP-NG: UAV ("quadcopter") PCBs + software, GPLv3: http://ng.uavp.ch/
  • Privacy: "transparency is more difficult than having things in the open"
  • http://incubator.apache.org/clerezza/: "Semantic web" linking social
    networks
  • bblfish.net = W3C federated social web XG: "WebID"


Data monitoring in terabit Ethernet


It is easy to monitor bus topology; point-to-point harder - especially duplex
(2x link capacity needed). Observing optical traffic can be done using a
splitter, but must be done for each direction separately. This is easier
with switches - can copy traffic, but combined traffic may be too much for
the analysis port.

"Data Mediation Layer": a device that collects observed traffic from >=1
sites, distributes it to >=1 analysis machines (based on rules): aggregation
(>1 input → 1 output), regeneration (1 input copied to >1 output),
distribution (depending on content), filtering (L2-L4), manipulation ("packet
slicing" = discarding packet content, masking (to hide sensitive data),
timestamping, annotation with input port number). The DML can filter out
most traffic and consolidate the rest (therefore fewer and weaker analysis
machines are necessary).

Examining existing filters: stored on device - could perhaps use serial TTY
for access?

Web GUI: firmware updates not automated. Gigamon allows getting the filter
list without authenticating.

DML machines have default accounts.

How the Internet sees you


Autonomous system = a network operated under a single policy: only sees what
passes through (assuming no cooperation between entities, or data collection
by law enforcement). A tap = mirror port, optical splitter, or a function in
the switch.

Surveillance (e.g. law enforcement): can see everything, but have to store /
analyze it all.

A "flow" = set of IP packets passing during a certain time interval = (src
IP, src port, dst IP, dst port); ~50 B/flow; the flow ID and data volume are
stored. "NetFlow" = export of flow records: lower data rate, but no packet
contents, higher router overhead (could fill up the flow table?).
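As a sketch, the flow accounting described here reduces to keying packets by their flow tuple and keeping per-flow byte counts (real flow keys also include the protocol, and records carry timestamps; both are omitted here):

```python
from collections import defaultdict

def aggregate_flows(packets):
    # flow key = (src IP, src port, dst IP, dst port); only the key
    # and the data volume are kept (~50 B per flow record).
    flows = defaultdict(int)
    for src_ip, src_port, dst_ip, dst_port, length in packets:
        flows[(src_ip, src_port, dst_ip, dst_port)] += length
    return dict(flows)

packets = [
    ("10.0.0.1", 40000, "192.0.2.7", 80, 1500),
    ("10.0.0.1", 40000, "192.0.2.7", 80, 600),
    ("10.0.0.2", 53124, "192.0.2.9", 443, 80),
]
flows = aggregate_flows(packets)
print(len(flows))  # 2 flows from 3 packets
```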

NetFlow v5: common, need v9 for IPv6. IPFIX ~ NetFlow v9: uses "information
elements" = (field ID, value) pairs.

Storage requirements: large ISP (2M flows/s: 2 PB/s all data, 4 TB/day
netflow).

sFlow: sampling - only e.g. 1/4000 packets => little data, low overhead
(only need to copy headers, not parse the packet), but can miss data.

Handling meaning of IP addresses: by logging DNS queries and answers, one can
understand virtual hosts without parsing HTTP. This can also reveal otherwise
unannounced domains (completely new domain with millions of users probably
means malware).

Using this, ISPs can do accounting/billing, but can also build a user's
profile - based on (restricted set of) used services, but also by connections
to automatic update servers (a signature visible on IP/DNS level).

Experiment on 27c3: 1/4k packets captured, anonymized IPs => can't do DNS, nothing
stored.

"If you want to be anonymous, be a sheep" so as not to stand out - no
clearing cookies, no Tor... "Do not connect to a known Tor exit - use a
special bridge". IPv6 privacy: enabled by default on Windows, disabled on
Linux - probably does not help anyway.

Open source tools: NFSen, ntop

RC4/WEP key recovery attacks


Goal: automated cryptanalysis tools.

Overview of WEP: Uses RC4 stream cipher = key stream generator to XOR with
plaintext. WEP can lose packets, so each packet is encrypted independently,
RC4 key = (secret key, 24b counter = IV), IV contained in the packet.

RC4 consists of a key schedule and a pseudo-random generator (PRGA). The key
schedule has a strong bias towards the secret key. PRGA: only 2 bytes in S
are swapped per output byte => we can find biases of the PRGA, allowing us to
guess the scheduled key from the keystream (and then guess keystream bytes).
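The kind of bias meant here can be demonstrated in a few lines. The sketch below implements plain RC4 and measures the long-known Mantin-Shamir bias (the second output byte equals 0 with probability about 2/256, twice the uniform rate); the new biases found in the talk are of the same flavor but subtler.

```python
import random

def rc4_keystream(key, n=2):
    # Key schedule (KSA): strongly biased toward the secret key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # PRGA: only two entries of S are swapped per output byte.
    out, i, j = [], 0, 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

random.seed(1)
trials = 20000
zeros = sum(rc4_keystream([random.randrange(256) for _ in range(16)])[1] == 0
            for _ in range(trials))
print(zeros / trials)  # close to 2/256 ~ 0.0078, not the uniform 1/256
```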

Looking for more PRGA biases:

  • Starting with 1st keystream byte and all relevant inputs, statistically
    find biases. Too many possibilities still - restrict some values to {-1,
    0, 1} => Found new biases, with strength varying by round and values.
  • To try values other than {-1,0,1}, a Fourier transform was used, chosen
    such that the key schedule correlation applies; this found more biases.


"Black box approach": Just use first 256 bytes of keystream and key bytes as
a linear equation. This is too large => limit to first L bytes of both key
and output => found new biases again (new because we observe correlations
"caused" by PRGA rounds).

Attacks on WEP - improvements: Can recover sum of key bytes (again through a
bias). => To recover WEP key with P=1/2, need only 9,800 encrypted packets.

Notes from 27th Chaos Communication Congress - day 3

Here are some notes from the third day of the 27th Chaos Communication Congress. See also
day 1, day 2, day 4.

SIP source routing


SIP: derived from HTTP. Typical flow: A→B invite; B→A ringing; B→A OK;
A→B ACK; RTP flow. Can talk through proxies or
directly (typical case: SIP - home router - SIP proxy + RTP relay - PSTN
gateway). Every SIP phone is a server! Names: foo@bar, client registry.
Can have multiple clients with 1 address, call will be forwarded to all.

Principle: stateless core => state is in the message = "source routing" -
describing all proxies on the way.

Authentication: RFC says use HTTP digest - but you can't authenticate
on/through a proxy. This allows faking caller IDs.

Can send data to internal LAN: Contact sip:...@192.168.1.2, or use source
routing (either invites, or "RTP" = arbitrary UDP!)

It might be possible to open a port by sending SIP-like XMLHTTP requests
(make the browser send a request from the client to attacker, confuses the
NAT router).

Countermeasures:

  • Drop SIP that doesn't come from a trusted proxy: IP spoofing still a risk.
  • Ignore IP addresses in SIP: this violates end-to-end principle, we have both huge messages and stateful proxies.
  • TLS: Needs stronger hardware.
  • IPSec: Difficult to deploy. "3GPP IMS" (mobile) uses IPSec to replace some other SIP auth functions.


Console hacking


Wii summary: 9 firmware updates, out of that 1 real feature (the rest
vulnerability fixes). ~30M/73M machines manufactured with vulnerable
bootloaders. ~1M users of "Homebrew Channel". "Pretty much broken from the
beginning".

XBox360: only 2 major hacks, both minor bugs that were fixed.

PS3: Supported Linux => everything worked (except 3D which was
intentionally disabled). Now that Linux was removed, becomes "interesting".

Statistically, if Linux is not available, then it will be hacked so that it
can be used. Most hacks to run Linux, not piracy - but piracy is made
possible as a side effect.

PS3 architecture: PPC + 8 SPUs. An "isolation mode" available - most of
SPU's memory becomes inaccessible to the PPC. In PPC: LV1 = hypervisor, LV2
= GameOS, games in user mode. SPUs are accessible from LV[12]. The boot
process consists of "interleaving" SPU and PPC execution.

PS3 has encrypted storage, but uses the same key and IV for each sector => if
we store known data, we can use the PS3 for decryption.
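A toy illustration of why a constant key and IV per sector is fatal: the keystream cancels out when two ciphertexts are XORed, so one known sector reveals another. (A stream-cipher model is used here for brevity; all names and data are made up.)

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)        # stand-in for the fixed per-sector keystream
known = b"KNOWN PLAINTEXT!"       # a sector whose contents we chose
secret = b"secret sector..."      # a sector we want to read

c_known = xor(known, keystream)
c_secret = xor(secret, keystream)

# C1 xor C2 = P1 xor P2 -- the keystream is gone:
recovered = xor(xor(c_known, c_secret), known)
print(recovered)  # b'secret sector...'
```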

"Geohot exploit": glitch memory address lines ("really amazing hardware") to
be able to manipulate HV's view of memory allocation.

PSJailbreak (+clones): USB device exploits a kernel bug => LV2 code
execution. The device is a "USB hub + devices": device 0 contains the
payload in a USB descriptor. Device 4: because the descriptor size is read
twice, it can change between reads, which confuses the code and causes a
buffer overflow,
overwriting vtables. PS3 doesn't have W^X protection on LV2, hypervisor does
not authenticate executable code (unlike Xbox), so LV2 is compromised, HV is
not - but still allows pirating games. Sony fixed this, but downgrade
is possible: USB service mode authentication uses HMAC, the key was leaked.
=> AsbestOS: replace LV2 in memory by Linux - just like OtherOS but 3D is
enabled.

Encrypted ELF format: encrypted metadata key, encryption metadata
authenticated by ECDSA, this authenticates rest of the system. We don't need
the keys, the secondary SPE will happily decrypt the data for us :)

Boot loader revocation mechanism: list of software that should not be booted.
Buffer overflow on revocation list decryption => can run arbitrary code, get
keys ... => boot chain of trust broken.

ECDSA signature on executables: ECDSA depends on randomness and uniqueness of
a random nonce, but Sony reuses a single "random" value => can extract
private key => can sign own executables - LV2, revocation lists, etc.
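The key-recovery algebra is short enough to show directly. This sketch skips the elliptic-curve arithmetic (r is normally the x-coordinate of k*G) and uses made-up demo values; only the modular algebra is real:

```python
# In ECDSA, s = k^-1 * (z + r*d) mod n, and r depends purely on the
# nonce k.  Reusing k gives two signatures with the same r, from which
# anyone can solve for k and then for the private key d.
# All values below are made-up demo numbers, not Sony's.
n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551  # P-256 group order
d = 0x1BADB002CAFEF00D          # "private key"
k = 0x600DC0DEDEADBEEF          # the reused "random" nonce
r = 0x1337C0DE12345678          # stands in for (k*G).x mod n
z1, z2 = 0x1111, 0x2222         # hashes of two signed messages

s1 = pow(k, -1, n) * (z1 + r * d) % n
s2 = pow(k, -1, n) * (z2 + r * d) % n

# Recovery from the two signatures alone:
k_rec = (z1 - z2) * pow(s1 - s2, -1, n) % n
d_rec = (s1 * k_rec - z1) * pow(r, -1, n) % n
print(d_rec == d)  # True
```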

http://fail0verflow.com/

Analyzing a modern cryptographic RFID system


HID is a US company, making RFID systems. HID Prox (1991): no security. HID
iClass (2002) claims (3)DES security ("one of the first to claim DES").

"Wiegand" - a name all over RFID: Wiegand effect => Wiegand wire => Wiegand
format of access control cards. Wiegand {interface, protocol}: between the
door reader and security panel (=backend).

Wiegand interface: (GND, DATA0, DATA1), very widely used, especially in US.
Legacy control systems have Wiegand interface input, so even new HID readers
have Wiegand interface output. The interface sends simply: (8b facility ID,
16b card ID), so a MITM can replay identifiers.
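For concreteness, this is roughly what crosses the wire in the common 26-bit layout (a sketch; the parity convention shown is the usual one, with the facility and card widths as described above):

```python
def wiegand26(facility: int, card: int):
    # [even parity | 8-bit facility | 16-bit card | odd parity]
    data = [(facility >> (7 - i)) & 1 for i in range(8)] + \
           [(card >> (15 - i)) & 1 for i in range(16)]
    even = sum(data[:12]) % 2       # even parity over first 12 data bits
    odd = 1 - sum(data[12:]) % 2    # odd parity over last 12 data bits
    return [even] + data + [odd]

frame = wiegand26(0x12, 0x3456)
print(len(frame))  # 26 bits, sent in the clear - trivially replayable
```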

HID supports other "formats" = data layouts. Advertised as a security
measure: "Cards with proprietary formats are more difficult to fraudulently
obtain." HID will create a customer specific format :) [lock-in].

Card organization: blocks of 8 bytes. 2 application areas per page,
application area = set of blocks. Page layout: serial #, flags, keys (not
readable by app), app data. Can change space allocation for apps within page.

Access control app: 1st app. Access control ID = Wiegand ID. Security
levels:

  • "Standard": 2 keys shared across all HID readers world-wide (any card "accepted" by any reader)
  • "High": Site-specific keys
  • "iCLASS Elite": Like "high", with keys stored at HID.

Customer-generated keys are discouraged - e.g. HID doesn't sell programmers.

A configuration card can change reader's mode to high security by loading
specific keys, therefore "standard" keys are a desirable attack target.

Readers use a "security sealed" connector - with black tape. The connector
is a PIC in-system-programming connector.

It is possible to circumvent the copy-protection fuses: one cannot read data
directly, but can erase the block that stores the copy-protection bit and
replace it with dumper code (to get the boot loader as well, erase the
program and replace it with a dumper).

Extracting 3DES keys: 4 random blocks easily visible, by bytewise changes we
can identify the function of the data. The keys are permuted, but
documentation of the permutation exists. Access to protected HID access
control is possible with the extracted keys.

Encryption is ECB without IV => can copy encrypted blocks between cards
even without decrypting them.
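A toy model of the block-transplant attack (hashlib stands in for 3DES here, since only determinism matters for the point; the card layout and names are invented):

```python
import hashlib

def ecb_encrypt_block(key: bytes, block: bytes) -> bytes:
    # Deterministic stand-in for 3DES-ECB: the same plaintext block
    # under the same key always yields the same ciphertext block.
    return hashlib.sha256(key + block).digest()[:8]

key = b"site-key"
card_a = [b"serial:A", b"balance1"]
card_b = [b"serial:B", b"balance2"]
enc_a = [ecb_encrypt_block(key, b) for b in card_a]
enc_b = [ecb_encrypt_block(key, b) for b in card_b]

# Without knowing the key, splice card A's encrypted balance into card B:
enc_b[1] = enc_a[1]
# A reader decrypting card B now sees card A's balance.
```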

Authentication between card and reader sniffed: There is no RNG in card, so
replay attacks are possible. 4-byte authentication of each party is used.
Each write is authenticated using 4 bytes. Otherwise there is no message
authentication, only CRCs.

Security summary of "standard security": Auth key derived only from card
serial #. Verbatim copy of blocks is possible. No MAC => MITM leads to
privilege escalation: do mutual auth, then fake other communication.

http://www.openpcd.org/HID_iClass-demystified

GSM stack on phone


http://bb.osmocom.org/

GSM is not scrutinized - only 4 closed-source implementations, even handset
manufacturers don't get source. The network side is similar. "Operators are
mainly banks" - outsource network {planning,deployment,servicing}, billing.

To start experimenting with a handset, we need at least a transceiver; we
want the "air" interface fully SW-controlled.

Baseband CPU: usually an ARM with an RTOS, and a DSP (for radio signals, A5).
No "modern" security features (stack protection, ...). GSM components that
are not generally available can be bought on grey market as surplus. Lazy
approach: take an existing phone - started with TI Calypso.

Implemented L1-L3, limited UI, good PC interface. Starting with L1 on
phone, L2-L3 on PC.

Firmware update done via RS232 over the audio jack connector.

Can do GSM phone calls. Can't do: Neighbor cell measurement (=> handover),
UI, GPRS, data calls.

Can modify cell response timing, this allows faking location of the phone.

Fully integrated Wireshark support available.

Also implemented logging of found cells, can triangulate cell positions;
accuracy of location: 500m.

Security analysis to be done.

Reverse-engineering a real-world RFID payment card


Taipei: "Easy Card" for transportation. Uses MIFARE - this does not in
itself mean the system is broken. Can buy card for cash, reload card, also
take out all cash. Used to be transportation only, now a general payment
system for up to €240.

We can use existing MIFARE holes to recover keys, read contents => observe
effect of transactions: we can see the value is stored on card, some other
changes observed: transaction log is stored on the card (the internal station
code exposed as a parameter on web pages).

Tampering: retroactively increasing cost of something bought = removing
money: everything works, so apparently there's no on-line database.
Decreasing cost of something bought = adding money.

"Building hardware without thinking about upgradability is negligent"

0-sized heap allocations


Microsoft research.

Problem: malloc(untrusted) with untrusted == 0 leads to heap overflow.
Similarly malloc(untrusted + sizeof(header)). (Linux with PaX returns
0xFFFFF000 on malloc(0), which is an unmapped page, so writes result in a
crash.)
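The arithmetic behind this bug class, modeled in Python with 32-bit wrap-around (the header size is an assumed example value):

```python
MASK32 = 0xFFFFFFFF  # C unsigned 32-bit arithmetic wraps modulo 2**32

def alloc_size(untrusted: int, header_size: int = 8) -> int:
    # What malloc(untrusted + sizeof(header)) actually receives:
    return (untrusted + header_size) & MASK32

print(alloc_size(100))         # 108 - the intended case
print(alloc_size(0xFFFFFFF8))  # 0   - wraps to a zero-sized allocation,
                               # while the caller still writes 'untrusted' bytes
```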

Theorem-prover-based tool used to find cases where allocation may be 0-sized.

Storage analysis methods:

  • Data-flow analysis: in compiler, but hard-coded, may be too conservative, which results in false positives
  • Model checking: formal logic, searching for an invalid state. States
    are more detailed and don't lose information between paths like data-flow
    analysis does. Easily automated, tools exist (SPIN, SLAM, BLAST, SPOT),
    but combinatorial explosion is a problem.
  • Theorem proving - looking for a specific proof that "input conditions"
    guarantee a specific condition. Tool: HAVOC based on "Boogie" theorem
    prover, which is available open-source. A plug-in for MS C/C++ compiler
    is available (binary-only).


Could prove 98% of situations are safe => 100 warnings/1M LoC, which is manageable to verify manually. To reduce the
size of code to analyze, we can "pull" assertions (= preconditions) to
higher-level APIs.

Only handling 0-sized allocations because general buffer overflow detection
is "super noisy".

FrozenCache


Want to keep crypto data only in chip cache - there is no real interface to
get data out, CPU reset clears cache (unlike external RAM).

Caches on CPU: L1 (I/D), L2 (combined). Cache control: CR0 - cache disable,
no write through (nowadays not fully supported). MTRRs, PAT control
cacheability as well.

FrozenCache mode: wipe data in RAM, write data to CPU cache, so that it
remains there and is not written out.

Need an L2 emulator to make sure the code+data will actually fit into cache.
Interrupts need to be disabled throughout. The rest of the OS is not cached,
so it is really slow; this should only be activated when "necessary" = when
the screen is locked (the various keys are otherwise in memory throughout).
Hooked gnome-screensaver to automatically switch into this mode.

Need to protect: keys, key schedule, intermediate data. We need hooks to
register locations used to store relevant data - but we're never really sure
they are in cache. To verify this, use MSR cache statistics, or use
non-existent addresses.

Messy details: It is necessary to consider cache associativity.
Hyperthreading, multi-core: reserve 1 CPU, bind thread to CPU (therefore we
can let others run at full speed, but there is a risk of cache snooping).
Unknown impact: SMM, HW virtualization, CPU's speculative prefetching.

Will release source code within 3 months.

Estimate: the cache can be used to store data of perhaps 5% of the "raw"
cache size.

Notes from 27th Chaos Communication Congress - day 2

Here are some notes from the second day of the 27th Chaos Communication Congress. See also
day 1, day 3, day 4.

Lightning talks - day 2



  • Data Privacy Management: http://www.daprim.de/
  • Starfish: http://www.kemenczy.at/ "Fully distributed, user-controlled
    network": Not a network of star topologies, each node has redundant links;
    no centralized authority.
  • FreedomBox: http://wiki.debian.org/FreedomBox
    An alternative to closed clouds: privacy control, decentralization,
    personal information stored at home.
  • Arduino: "An easy way to get into microcontrollers, for non-geeks"
  • Telecomix DNS: http://gitorious.org/telecomix-dns
    A decentralized DNS server network, an alternative to ICANN: reliability,
    stopping domain seizures.
  • SAP insanity: General complaints about the product: On-screen keyboard
    only. Column mapping GUI between tables only with arrows, with hundreds of
    attributes results in a mishmash of lines.
  • SMS port scan: Set up a SMS interface to a port scanner, running on the CCC GSM
    network.
  • TSAscreening: http://bit.ly/tsarights
    "You are next", "they're the ones being terrorists".
    By law, searches can only be reasonable and necessary; you have the right to record on a camera; you have the right to bring medical liquids (juice!).
  • NetS-X: http://code.google.com/p/nets-x/
    "A hacking game" = e-learning on net security, sandboxes for playing


I Control Your Code


The idea is to use separate identities/rights for each (user, application)
pair.

The "proposal" is "user-space virtualization" to authorize all syscalls: use
binary translation of all code, prevent:

  • control flow transfers to unexpected code (only to .text areas, only to known functions in inter-module transfers)
  • unexpected returns/indirect jumps (only to valid targets, enforced with a shadow stack)
  • jumps into middle of instruction
  • switching between 32-bit and 64-bit code

Therefore, all instructions are guaranteed to be validated, so we can have a
policy authorizing system calls.

Also can be used to track "unmodified" execution.

Implementation: http://nebelwelt.net/projects/fastbt/ - 6-8% overhead

Policy strength / comparison with SELinux: Can enforce a list of allowed
syscalls + arguments, has also a learning mode. Can add specific checks
(by writing code).

Comparison with HW virtualization: overhead 3-5%, but it's not possible to
get instruction-level control.

SSL Observatory


http://www.eff.org/observatory

SSL problems: Too many trusted parties; CA bugs allowed e.g. the \0 attack.
We should be afraid of X.509 - too flexible, ugly, history of implementation
vulnerabilities.

EFF SSL observatory: contacted all allocated IPv4 addresses. 16.2M IP addresses listening, 11.3M started SSL, 4.3M used valid certs
(1.5M distinct valid leaf certs)

Found 1482 trustable CAs (incl. intermediate CAs)! - 1167 issuer strings, 651
organizations (~200 are German universities through a single intermediate).
Notable cases: dept. of homeland security, US defense contractors, CNNIC
(China - root CA controversy is really irrelevant), Etisalat (Dubai -
installed malware on customer's hardware), Gemini observatory

~30k servers still have Debian broken keys (out of which ~500 had valid CA
signatures)

There is no general way to contact a CA (e.g. to ask them to revoke a cert).

Weirdness found: certs that both are and aren't CAs, certs for "localhost",
127.0.0.1.

Firefox and IE cache intermediate CAs, so we can't say for sure if a cert is
valid - it may be valid only if the user has "validated" the CA by visiting a
different site where the CA was signed; there are 97K such certificates.

EV certificates: Identified by (CA-specific) OID. Don't work all that well
with browser Same Origin Policy - EV to non-EV references considered "same
origin". Problems found: RFC-1918 addresses, unqualified names, weak keys,
long expirations. 13 issuers violated EV spec by signing a weak key (even a
512-bit key!) Found EV certs with wildcards, also violate policy.

Data is available for download.

Plan for a decentralized SSL observatory, code in progress. Design: use Tor
to send observed raw certs - may get a reply notifying about a MITM; this
only works with a delay, but better than nothing.

DNSSec: no longer sure about it being a good thing, due to recent domain
seizures by US.

High-speed high-security cryptography (DJB)


I'm not summarizing this; go read the slides! Relevant, inspiring, perhaps
not quite practical as presented.

Data recovery techniques


The process goes through all layers of the mechanism: data acquisition
(complete or partially destroyed), disk array composition, iSCSI/NAS data
layout handling (iSCSI disk is a file, optionally fragmented), handling
virtualization, file systems, file formats (e.g. a database), verification of
the result (is the data valid?).

HW lab setup: Clean atmosphere necessary - "most of dust comes from
customer's drive". Stereo/video microscopes used for observing head
alignment. "Buy the best tools" - "no tools are better than bad tools".
Disks needed both for making a 1-1 copy to physical disk, and as spare
parts - there are ~10K drive models on the market; sometimes spare parts
don't match even on the same model # (manufacturer fulfills contract to ship
legacy models by shipping new model with firmware-limited capacity).

Tool examples: flash chip reader, "head lifter" (custom), a tool to extract
the platter out of the bearing, a press to put the platter onto a spindle.

Data acquisition from a disk: spinning the disk - in its shell or a replaced
shell; can't get data without spinning the drive in <1 month => impractical.
Used to use a "spin stand" - today it is almost impossible to align the
platter correctly. As a last resort: magnetic microscope - can handle broken
platters, but reading a single surface takes ~5 years.

Data acquisition from flash: PCB damage: can be fixed, but rare; otherwise
desolder chip, read it, reorder blocks.

Kinds of damage for disks: surface damage; bent spindle (=> stuck); defective
heads (stuck to surface); electronic failures; firmware corruption; media
contamination (e.g. bearing fluid); fire; water. Hard drives are not
sealed/devacuated - there are little holes to even out pressure, so water can
get in. Fire damage is normally not hot enough to damage magnetic data. For
flash it is about the same, except for physical damage.

Drive heads: with >1 head, only 1 is activated/pre-amplified at a time. Some
firmware just fails if another head is in an unexpected position => need to
align the heads correctly to each other.

Validating recorded files: replace unread blocks with a pattern (which
includes some metadata) => can check if a recovered file has been damaged,
can focus on priority missing data.
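A sketch of the marker technique (pattern contents and sizes invented): unreadable sectors are filled with a recognizable pattern carrying the sector number, so damage in any recovered file can be located afterwards.

```python
MARKER = b"!BADSECT"
SECTOR = 512

def fill_unread(sectors):
    # Replace each unreadable sector (None) with marker + sector number + padding.
    return [data if data is not None
            else MARKER + i.to_bytes(4, "big") + b"\0" * (SECTOR - len(MARKER) - 4)
            for i, data in enumerate(sectors)]

def damaged_sectors(image):
    # Scan a recovered image or file for the marker pattern.
    return [i // SECTOR for i in range(0, len(image), SECTOR)
            if image[i:i + len(MARKER)] == MARKER]

image = b"".join(fill_unread([b"\xaa" * SECTOR, None, b"\xbb" * SECTOR]))
print(damaged_sectors(image))  # [1]
```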

Fun with firmware: There is a separate HW connector - on Seagate serial I/O,
3.3V, only starts talking after Ctrl-Z. HD commands are family-specific.

Best way to kill a drive for sure: "really difficult". In most cases
overwriting data 1x is OK - overwriting multiple times doesn't help. LBA and
physical addresses aren't 1-1, so one never knows if the data was actually
overwritten.

HW firmware implementation: standard CPUs, ARM is popular. Reverse
engineering: the firmware contains several megabytes of code and data, most
firmware is actually loaded from the media!

Backdooring embedded controllers:


Existing laptop backdoors: hardware (keyloggers etc.), software (OS), BIOS,
ACPI, firmware, other devices - will ONLY cover the EC here.

EC = Embedded Controller: 8- or 16-bit MCU, "beefed up 8042 keyboard
controller", "Renesas" on ThinkPads. Controls sensors, actuators
(temperature, battery, fans, brightness, LEDs). Handles hotkeys (VGA output,
brightness control) => needs key press data. (MacBook has an USB keyboard =>
different architecture).

Focus: ThinkPad's "Renesas": Based on H8S, running when laptop has power
(even when "off"). BIOS and EC can be flashed via LAN! Some laptops have it
enabled by default!

Easy to work with: Commented disassembly for T43 already exists:
ec.gnost.info/ec-18s/ The author of the patch to swap Fn and Ctrl allegedly
never had Lenovo hardware.

The implemented backdoor can record and provide keystroke data - 4kB space
available, 5:1 compression => 20K keystrokes. To get data out, can use ACPI
results, or use a LED wire as an antenna. To send data remotely: can use a
covert timing channel (manipulating keystroke timing).
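The timing channel can be sketched as follows (the delay values are invented; real parameters would have to survive network jitter):

```python
SHORT, LONG, THRESHOLD = 0.05, 0.20, 0.12  # seconds between keystrokes

def encode(bits, start=0.0):
    # Delay each keystroke a little (0) or a lot (1); return timestamps.
    times, t = [], start
    for b in bits:
        t += LONG if b else SHORT
        times.append(t)
    return times

def decode(times, start=0.0):
    # An observer of the keystroke traffic recovers bits from the gaps.
    bits, prev = [], start
    for t in times:
        bits.append(1 if t - prev > THRESHOLD else 0)
        prev = t
    return bits

secret_bits = [1, 0, 1, 1, 0]
print(decode(encode(secret_bits)))  # [1, 0, 1, 1, 0]
```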

Defense: Dump the EC firmware (reliable - implemented through HW, malicious
firmware cannot tamper with it), then we can verify it. Plan to use
coderpunks.org/ecdumper to see if it is a known version (but what about
trust, i.e. an attacker submitting a hacked version?).

Future plans: Examine other reflashable devices. "Would like to see" vendors
signing firmware, verifying it on boot (TPM enabled probably can't
[currently] detect this). We need "fundamental discussion" about firmware
trust.

Dumping firmware is done via a protocol over ports 60/61. The original
firmware was found in a DOS version of BIOS updates, *.FLZ contains both BIOS
and EC firmware. Tools used: GNU binutils, a checksum
recomputation tool.