A couple of days ago I talked with a friend about computers in our
primary schools. At her school they had a set of Microbee computers, a
family of Australian Z80-based micros running CP/M. I don't know
exactly what model they had, but after asking around among other
friends, some sort of Microbee seems to have been rather common in
Swedish schools in the mid-80s.
I remember seeing ads for Microbees in the 1980s and much later had an
opportunity to play with a 32IC for a while. Quite nice little machines.
From 1985/86 my school had Compis computers, which I think was
slightly more common, Compis being The Official Swedish School
Computer. Compis was an 80186-based CP/M-86 machine with a really
nice, low keyboard and fairly high-resolution monitors.
The systems at my school were 640x400 monochrome (green), plus a
single colour machine with much slower graphics that everyone avoided.
Allegedly there was also a version with 1280x800 monochrome (b/w), but
I never saw anything like that.
The machines had floppy drives but were also connected to a shared 10
MiB hard disk. The hard disk was only available in read-only form,
except for one drive letter which could only have one user at a time.
I remember a program called boxloss which unmounted this virtual
drive from anyone who happened to be using it and mounted it for you.
The password was “Fredrika”. I don't remember how you were supposed to
mount it without stealing it like this. Anyone?
There was some kind of menu system, but if you really wanted to you
could get at the CP/M prompt. This prompt, unfortunately, was the
standard CCP, not any of the fancy ZCPR stuff that people running CP/M
on Z80s were used to by then.
One thing that struck me during our conversation, and from many other
conversations about computers in Swedish primary schools in the 1980s,
was that the computers were very seldom used for anything! For
instance, we were not allowed to use the computers to write texts!
There was no word processor or text editor available, at least to my
knowledge, and we were certainly not allowed to use the computers
outside of specific “computer classes”.
This is particularly interesting if you look at the Microbee, since as
far as I can tell all models had a built-in text editor!
We weren't allowed to use any advanced development tools either. On
the Compis we had to write programs in COMAL, a language looking a lot
like BASIC, but with proper procedures and functions. Really
frustrating when I had the wonderful Turbo Pascal at home and I knew
that Turbo Pascal was available on CP/M and for the Compis as well.
After our conversation I decided to look back at some development
environments on CP/M and see if I could have lived with the
environments back then...
YAZE-AG comes pre-loaded with CP/M 3.1 and a lot of development tools.
I was totally blown away by Turbo Modula-2! Look at drive M:.
Wikipedia tells me TM-2 was never marketed by Borland but later became
TopSpeed Modula-2 for MS-DOS. I had never used it before but it's
really incredible. The environment is reminiscent of Turbo Pascal 3.0
with a small menu system and a WordStar-like editor, but the language
is much richer.
According to this
the cost of TM-2 was $69.95. Tremendous value, indeed!
I would have been very happy indeed if this had been available at my
school. I don't think TM-2 was available for CP/M-86, though it seems
Logitech's Modula-2 compiler was. I think I would have been
quite happy with just TP 3.0 as well, which was what I was programming
in at home, but more important than this would have been access to
I'm setting up a couple of FreeBSD jails in an IPv6-only world. I was
a bit surprised to note that although pkg.freebsd.org has an AAAA
record in DNS, it's impossible to reach the DNS server that gives the
AAAA answer over IPv6. Only if I use a resolver that is on both IPv6
and legacy IP will I be able to install packages.
able to install packages.
There might be an entity (Ahem... NSA) that records all your
communication. In 10 or 15 years you might be a Person of Interest.
Then they break your RSA with their new quantum computer and read all
your old correspondence.
Initial recommendations for long-term secure post-quantum systems:
Symmetric encryption: at least AES-256, or Salsa20 with a 256-bit key.
Symmetric authentication: GCM, Poly1305.
Public-key encryption: McEliece with binary Goppa codes.
Public-key signatures: hash-based signatures.
Lamport suggested one-time signatures in 1979; Merkle extended them to
multiple messages.
The simplest signature scheme signs only the empty message: the whole
scheme is key generation. Next, a scheme that signs one fixed message,
a panic button: a panic message.
For instance, use SHA3-256: the public key is the hash of the private
key, and the signature is the private key itself. You can only send a
single message, since signing reveals the private key.
A signature scheme for 1 bit messages: two private keys.
For 4 bits: 4 key pairs! Concatenate the public keys to create a new
public key for the message.
Don't use the same secret key to sign two messages. Obviously, since
signing reveals the private key.
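A minimal Python sketch of the Lamport scheme just described, using SHA3-256 as the hash (the four-bit message size is arbitrary):

```python
import hashlib
import secrets

def sha3(data):
    return hashlib.sha3_256(data).digest()

def keygen(nbits=4):
    # One pair of secret values (for bit 0 and bit 1) per message bit.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(nbits)]
    # The public key is the hash of each secret value.
    pk = [(sha3(a), sha3(b)) for a, b in sk]
    return sk, pk

def sign(sk, bits):
    # Signing reveals the secret value matching each message bit.
    return [sk[i][b] for i, b in enumerate(bits)]

def verify(pk, bits, sig):
    return all(sha3(s) == pk[i][b] for i, (b, s) in enumerate(zip(bits, sig)))

sk, pk = keygen(4)
msg = [1, 0, 1, 1]                        # a 4-bit message
sig = sign(sk, msg)
assert verify(pk, msg, sig)
assert not verify(pk, [0, 0, 1, 1], sig)  # a flipped bit fails
```

Note how a second signature with the same key would reveal more secret values, which is exactly why the key is one-time only.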
Lamport's one-time signature messages.
Merkle's 8-time signature system. Make 8 Lamport keys, concatenate
them and hash them. One hash is the public key, but many private keys.
Needs a secure hash function, nothing else: for instance SHA-3.
Small public key.
Security well understood.
Proposed as an IETF standard.
But stateful: you need to remember which private keys have been used.
Imagine what happens if we restore from backup.
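The "concatenate and hash" construction from the notes can be sketched like this (the real Merkle scheme uses a hash tree so a signature only carries log2(n) sibling hashes; this flat version ships all of them):

```python
import hashlib
import secrets

H = lambda b: hashlib.sha3_256(b).digest()

def lamport_keygen(nbits=8):
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(nbits)]
    # Serialized one-time public key: H(a_i) || H(b_i) for each bit.
    pk = b"".join(H(a) + H(b) for a, b in sk)
    return sk, pk

# Eight one-time keys; the master public key is the hash of all eight
# one-time public keys concatenated.
keys = [lamport_keygen() for _ in range(8)]
master_pk = H(b"".join(pk for _, pk in keys))

def sign(index, bits):
    sk, pk = keys[index]
    sig = [sk[i][b] for i, b in enumerate(bits)]
    # The signature carries this one-time public key plus the sibling
    # public keys, so the verifier can recompute the master key.
    others = [p for j, (_, p) in enumerate(keys) if j != index]
    return sig, pk, others, index

def verify(master, bits, sig, pk, others, index):
    # 1. Does the one-time signature check out against its public key?
    for i, (b, s) in enumerate(zip(bits, sig)):
        if H(s) != pk[i*64 + b*32 : i*64 + b*32 + 32]:
            return False
    # 2. Do all the public keys together hash to the master key?
    allpks = others[:index] + [pk] + others[index:]
    return H(b"".join(allpks)) == master

bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert verify(master_pk, bits, *sign(3, bits))
```

Each of the eight one-time keys may be used once, hence the statefulness problem above: sign with index 3 twice and the scheme breaks.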
Solution to statefulness: eliminate the state! Build a big tree of
certificate authorities; each signature includes the cert chain.
Each CA key is a hash of the master secret and the tree position.
Deterministic, so you don't need to store the results.
A random bottom-level CA signs the message. There are many
bottom-level CAs, so the one-time signatures are safe.
The cost is 0.6 MiB for a signature: Goldreich's signature system with
good one-time signatures.
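The "CA key is a hash of master secret and tree position" trick fits in a few lines (the position encoding here is invented for illustration):

```python
import hashlib

def derive_seed(master_secret, position):
    # The one-time key seed at each tree node is a hash of the master
    # secret and the node's position. Deterministic: the same node
    # always yields the same seed, so nothing needs to be stored.
    return hashlib.sha3_256(master_secret + position.encode()).digest()

master = b"\x00" * 32            # placeholder master secret
# "0/1/0" = path from the root: left, right, left (made-up encoding).
seed_a = derive_seed(master, "0/1/0")
seed_b = derive_seed(master, "0/1/0")
assert seed_a == seed_b          # deterministic: no state to remember
assert seed_a != derive_seed(master, "0/1/1")
```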
Builds a model of the firewall's filtering behaviour. Only 11 filter
behaviours are defined in the complete semantics.
This is a mathematical specification, not an implementation. It
describes what a firewall does. The specification of CallReturn, for
instance, doesn't use a call stack but describes calls into new
iptables chains.
Why do this?
You can use it to prove “semantics-preserving simplification”, for
instance prove that the firewall filtering behaviour stays the same
after optimization or after having removed logging.
The specification is done in Isabelle, then exported to executable
code in a selection of languages: Haskell, for instance.
This talk was at the same time as the one on new memory corruption
attacks, so I watched the recording.
Cashier station to payment terminal: ZVT or OPI protocol.
Payment terminal to payment processor over the Internet: Poseidon
protocol/ISO 8583. Many countries have their own dialect of ISO 8583.
Payment terminal: reads the card's mag stripe or PIN/chip.
ZVT runs in the shop over a network connection. Very old installations
use a serial port, and the protocol was designed for that, which shows.
Attack 1: Against the customer
ARP spoofing to get a man in the middle on the local network. Inside
the shop or, if using wifi, even outside.
We send only a command “read the card”, then we have the mag stripe!
Then we go ahead with the transaction.
But how do we get the PIN?
ZVT payment terminals are supposed to work with an HSM: the PIN
shouldn't be seen outside.
ZVT has a function: “Display a text, then enter a number” but this
function must be signed. Can we sign it ourselves?
To find a valid MAC... the HSM leaks the correct MAC through a timing
side channel! They found an active JTAG port on the main CPU and could
talk to the HSM. Brute-force the first byte and measure the response
time: the MAC is compared byte by byte in the HSM, which returns
immediately on the first mismatch!
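The byte-by-byte comparison can be simulated to show why it leaks. This is just the timing-attack principle, not the real HSM interface; response time is modelled as a counter:

```python
def hsm_compare(mac_guess, secret_mac):
    # Simulated HSM: compares byte by byte and returns on the first
    # mismatch. "cost" stands in for the measured response time.
    cost = 0
    for g, s in zip(mac_guess, secret_mac):
        cost += 1
        if g != s:
            return False, cost
    return True, cost

def recover_mac(secret_mac):
    recovered = b""
    while len(recovered) < len(secret_mac):
        pad = b"\x00" * (len(secret_mac) - len(recovered) - 1)
        best, best_cost = 0, -1
        for candidate in range(256):
            ok, cost = hsm_compare(recovered + bytes([candidate]) + pad,
                                   secret_mac)
            if ok:                       # final byte: the HSM accepts
                return recovered + bytes([candidate])
            if cost > best_cost:         # slowest answer = correct byte
                best, best_cost = candidate, cost
        recovered += bytes([best])
    return recovered

secret = bytes.fromhex("a1b2c3d4")
assert recover_mac(secret) == secret
# 4 * 256 guesses instead of 256**4: linear work, not exponential.
```

The fix, of course, is a constant-time comparison that always inspects every byte.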
Attack 2: Against the merchant
ZVT allows local terminal hijacking.
ARP spoofing to become MITM like before.
Reset the terminal ID to your own terminal ID. The attacker needs to
be a merchant as well and have a terminal ID.
There is no authentication between cashier and pay terminal.
Attack 3: Attack on Poseidon
Poseidon is a dialect of global payment standard ISO 8583. De facto
standard in Germany.
Authentication uses pre-shared keys. However, the key is the same for
many, many terminals! This leaves just a user name: the terminal ID,
which is public information and printed on receipts!
For the attack to succeed:
Buy a payment terminal.
The service management password. Just google for it! Or brute-force
it over ZVT, or read it out with JTAG...
Your victim's Terminal ID. It's on the receipt! Or just guess. They
are assigned incrementally, so if you know one good Terminal ID you
know many others.
The port number on the backend which your victim uses.
A hundred different alternatives? Try them all!
Now you can, for example, print pre-paid call codes for your phone.
Another possibility is to “refund”: initiate a money transfer with a
negative value! From any bank account! Completely independent of any
earlier transaction.
Attack 4: Hardware
Buy a payment terminal on Ebay. It uses an internal HSM to protect its
secrets. The HSM is battery-backed SRAM under a plastic cover. When a
metal mesh in this cover is breached, the secrets are erased.
But the mesh is only connected in the corners. Can we get something
under it? Yes! Inject a hypodermic needle and ground it. Ta-da! Then
simply remove the lid.
There was an active JTAG port!
What can be done?
Individual keys! Different keys for all payment terminals. The good
news is that the Poseidon protocol can be used to distribute new keys.
Switch off functionality that you don't need.
Refund is activated by default. Turn it off!
Detect suspicious behaviour.
Create better protocols.
The main ZVT alternative is OPI, Open Payment Initiative. OPI still
lacks authentication and encryption even though it's from 2003!
Poseidon's family: ISO 8583, with dialects used everywhere. The bad
system-wide symmetric keys are not mandatory in the protocol, and key
distribution through Poseidon is possible.
Data Execution Prevention, DEP. R-X on text segment. No new code can
be injected, an attacker will have to use ROP or similar.
ASLR, address space layout randomization: makes ROP harder, especially
on 64-bit architectures, since gadget addresses are not known in
advance.
Stack canaries. Place strategic values on the stack that shouldn't
change, and check them before returning.
Safe exception handlers.
ASLR and DEP only effective in combination.
Enforce dynamic restrictions on return instructions.
One example: protect return instructions through a shadow stack.
Another: enforce, with a hardware label, that we can only return to
the calling function.
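A toy model of the shadow-stack idea (not any real hardware or compiler implementation):

```python
class ShadowStackViolation(Exception):
    pass

class CPU:
    """Toy model: every call pushes the return address on both the
    ordinary stack and a protected shadow stack; every return checks
    that the two agree before jumping."""
    def __init__(self):
        self.stack = []      # attacker-writable (lives in memory)
        self.shadow = []     # protected, not attacker-writable

    def call(self, return_addr):
        self.stack.append(return_addr)
        self.shadow.append(return_addr)

    def ret(self):
        addr = self.stack.pop()
        if addr != self.shadow.pop():
            raise ShadowStackViolation("return address was overwritten")
        return addr

cpu = CPU()
cpu.call(0x4010)
assert cpu.ret() == 0x4010        # a normal return is fine

cpu.call(0x4010)
cpu.stack[-1] = 0xdeadbeef        # simulated stack smash: gadget address
try:
    cpu.ret()
    assert False
except ShadowStackViolation:
    pass                          # the overwrite is caught
```

This is why ROP stops working under stack integrity: the attacker controls the ordinary stack but not the shadow copy.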
Control-flow integrity, CFI
Never leave the control-flow graph.
Find the set of allowed targets for each location at compile time.
Online set check: check whether this target is allowed.
New attacks: control-flow bending
Bend a valid graph... Not calling new code, only allowed targets.
Circumvents control-flow integrity.
CFI's limitation: statelessness.
Each state is verified without context, unaware of constraints between
states.
Weak CFI is already broken. Lots of papers about it.
Microsoft's control-flow guard is an instance of a weak CFI.
Precise control-flow graph: no over-approximation.
Stack integrity through a shadow stack.
Fully-precise static CFI: a transfer is only allowed if some benign
execution uses it.
How secure is CFI? With and without stack integrity?
Analysis found a few functions, like memcpy() and malloc(), that
everyone uses. If everyone uses them, you can return to a lot of
functions and build ROP chains easily.
Result: CFI without stack integrity is broken.
When we add stack integrity to the mix we can greatly increase the
protection of current systems:
ROP is no longer an option
attacks become harder
However: an interpreter will still be vulnerable!
And there are lots of interpreters out there! An example is printf()!
printf() format strings include memory reads (%s), memory writes (%n),
and conditionals (%.#d): enough for a Turing-complete Brainfuck
interpreter. In printf formats!
Marie lives with a pacemaker: “A project to break my own heart.”
A rather low-key presentation with no alarming remote cracking of
pacemakers, but interesting insights into medical implants. For
instance, Marie's own pacemaker has two wireless interfaces: one
near-field and another for remote monitoring/telemetry when used with
an optional base station in her home.
They bought a programmer for the pacemaker and some base stations on
Ebay and did experiments. They could extend the programmable range to
Patient privacy issues.
Device malfunction - software bugs.
Death threats and extortion.
(Remote murder.) More unlikely than the others, but this is what
would get the press.
Some research needed:
Open source medical devices!
Medical device crypto. Any backdoors in crypto will end up here!
Personal area network monitoring
“We need to be able to verify the software that controls our lives.”
“You too can do this research! Lots of low-hanging fruit.”
There is an exemption to the Digital Millennium Copyright Act (DMCA)
for research on medical devices and automotive devices. Reverse
engineering is possible without infringing copyright law.
The programmers are unique to each device, with only some
standardization.
Much smaller pacemakers coming. Inside the heart!
“Are there laws requiring the doctor to tell them they are
upgrading?”
No third party tests? No consumer laboratory?
How the Great Firewall discovers hidden circumvention servers
An analysis of how China's Great Firewall blocks traffic and knows
what to block.
“We know what is blocked, how it is blocked, where it is blocked.”
“Most measurements are one-off; continuous measurements are
challenging.”
Even if you use a DNS resolver outside of China, DPI looks at your
traffic and spoofs the results. If you wait for a while... you get the
real DNS reply as well! It's not filtered!
Many keywords are blocked: DPI looks at your GET request, sends RST
to your browser before the TCP connection is fully established.
HTTPS helps, but see above about DNS.
Encrypted tunnels help - but DPI looks at port numbers, type of encryption,
handshake parameters, flow info.
Active probing: the GFW checks the cipher list in the TLS client hello
from a client in China to a server in Germany. Vanilla Tor (not using
any obfuscation) looks special. Is it Tor? The GFW starts a
short-lived probe and tries to speak the Tor protocol to the server.
If it succeeds, it blocks the client from accessing this server.
Several data sets were collected to see where the probes come from.
Shadow data set: a test with clients in CERNET and Unicom in China and
Tor bridges running vanilla Tor, obfs3, and obfs4.
The Sybil data set: clients in China with 600 ports on a Tor server.
Log data set: a web server with logs dating back to 2010.
They collected 16 000 unique probe IP addresses! 95% of the addresses
were seen only once.
Reverse DNS names with “adsl” in them... Looks like ISP addresses. The
single IP address 188.8.131.52 was almost 50% of the probes!?
The majority of probes come from three ASes: 4837, 4134, 17622.
Do they hijack these addresses for GFW use? While a probe is active no
communication with it is possible: traceroute times out, no ping...
What do they have in common?
Narrow TTL distribution.
Source ports from the entire 16-bit port range!
Patterns in TCP TSval.
Does not seem like an off-the-shelf TCP/IP stack. A user-space stack?
Strange TCP initial sequence numbers... not really random... a zig-zag
pattern, perfectly correlated with time. State leakage!
The probes all share an uncommon TLS client hello.
No randomly generated SNI.
A unique cipher list.
State leakage shows that probes are centrally controlled.
Not clear how they control probes.
A proxy network?
An off-path device in the ISP's data centre? Machines connected to a
switch?
Blocking is reliable but fails predictably.
In 2012 probes were batched, perhaps started by cron.
Now it's real time. Median arrival time is only 500 ms.
SSH was blocked in 2011 but no longer.
VPNs: OpenVPN sometimes? SoftEther.
Tor: vanilla, obfs2 & obfs3.
Appspot - possibly because they're looking for GoAgent proxies.
The probes don't seem to be using reference software. Handcrafted!?
The probes look very different from ordinary software and can be
detected.
Unix doesn't encourage you to run software securely. A web service
only needs a specific set of capabilities but can do everything.
Access controls like AppArmor are not a real solution: they put the
burden on the package maintainers.
Untrusted third-party programs are also extremely unsafe. Can the OS
provide better isolation?
A program starts as an ordinary process, opens all the files it needs,
then calls cap_enter(). The process can still use its file
descriptors, read() and write(), but can't use open() et cetera: those
return ENOTCAPABLE.
Used in FreeBSD by some programs: dhclient, hastd, ping, sshd,
tcpdump, et cetera.
However, there are problems: code isn't designed to have system calls
disabled!
C library: locales unusable, incorrect timezone, et cetera.
Crypto library: non-random PRNG!
Capsicum doesn't scale: with in-house maintained code it works
(Chrome), but with off-the-shelf libraries it becomes harder.
A new POSIX-like runtime environment.
No more state. Capsicum is always turned on.
Capsicum-conflicting APIs have been removed.
Bugs become compiler errors.
Global namespaces are entirely absent.
Processes can no longer hardcode paths and identifiers.
Resources cannot be acquired out of the blue.
Can: allocate memory, create pipes and socket pairs.
Cannot: open paths on disk.
Cannot: create network connections.
Additional rights come from file descriptors.
Make sure you have the right set of file descriptors when starting the
process.
A file descriptor is its own chroot.
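The "file descriptor is its own chroot" idea corresponds to the POSIX *at() calls, where opens are relative to a directory descriptor. A quick Python illustration (plain POSIX, shown here, doesn't stop you escaping with “..”; Capsicum and CloudABI do enforce the boundary):

```python
import os
import tempfile

# Set up a directory with one file in it.
root = tempfile.mkdtemp()
with open(os.path.join(root, "index.html"), "w") as f:
    f.write("hello")

# A directory file descriptor acts as the root for relative opens.
dirfd = os.open(root, os.O_RDONLY)
fd = os.open("index.html", os.O_RDONLY, dir_fd=dirfd)  # relative to dirfd
assert os.read(fd, 100) == b"hello"
os.close(fd)
os.close(dirfd)
```

A capability-style web server would receive such directory descriptors at startup and never name an absolute path again.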
Process descriptors: replacement for wait()/kill(). Special fork
gives you a process descriptor. These descriptors can't be passed over
File descriptors have permission bitmasks allowing fine-grained
limiting of actions performed on them.
Example file descriptors for a web server:
A listening network socket used to receive incoming HTTP requests,
One or more file descriptors to directories containing resources
accessible through the web,
One or more network sockets connected to backend services used by
the web server (e.g., database servers),
A file descriptor pointing to a log file.
This web server will be limited to the above and can't escape.
POSIX becomes tiny if you remove all interfaces that conflict with
capabilities: only 58 system calls remain.
Goal: add support to existing POSIX operating systems.
Allows reuse of binaries without recompilation.
Upstreamed: FreeBSD on arm64 and x86-64.
Developing for CloudABI
CloudABI Ports - a collection of cross-compiled libraries and tools.
Builds native packages: FreeBSD pkg and Debian packages.
Use cases of CloudABI
Secure hardware appliance.
High-level cluster management.
CloudABI as a service: no virtualization overhead. No need to maintain
entire systems, just applications, which can be written in any
language.