# Functional State machines

Recently I have been doing a lot of time-series analysis (cryptocurrency trade histories). After the first week of trying 20 things before breakfast, I settled upon using discrete, deterministic finite state machines to call features, such as peaks, troughs, slides, spikes and so on.

The ddfsm API that I’ve evolved towards has surprised me a bit so I thought it would be worth sharing. Before you ask, at this time the code is all tied up in a commercial code-base and I’ve not had time to extract it into a library.
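Since the real code isn’t public, here is a minimal, hypothetical sketch of the flavour of API I mean: a deterministic step function over a small, closed state space, folded along the series to label each point. The state names and the `step`/`label` functions are my inventions for illustration, not the commercial API.

```scala
// A tiny discrete, deterministic FSM for labelling time-series features.
// States double as feature labels; all names here are hypothetical.
sealed trait Feature
case object Rising  extends Feature
case object Falling extends Feature
case object Peak    extends Feature
case object Trough  extends Feature

object Dfsm {
  // Deterministic transition: exactly one next state per (state, delta).
  def step(state: Feature, delta: Double): Feature = state match {
    case Rising  if delta < 0 => Peak   // a rise followed by a fall
    case Falling if delta > 0 => Trough // a fall followed by a rise
    case _       if delta > 0 => Rising
    case _       if delta < 0 => Falling
    case st                   => st     // flat segment: stay put
  }

  // Fold the machine along consecutive price deltas, labelling each point
  // after the first. The Rising seed is an arbitrary starting state.
  def label(prices: Seq[Double]): Seq[Feature] =
    prices.sliding(2).map(w => w(1) - w(0))
      .scanLeft(Rising: Feature)(step)
      .toSeq.tail
}
```

The nice property is that the whole feature-caller is just a pure fold, so it composes with the rest of a streaming pipeline for free.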

# Scientific peer-review publication as discourse

As someone once said, “if it isn’t peer-reviewed, it didn’t happen”, and in one sense this is entirely correct. Peer-review defines an arena of publishing where only other peer-reviewed content may be cited as making or supporting a claim – other things may be cited as evidence and data, of course – but only peer-reviewed sources are cited as having the authority to make claims.

Peer-review publishing does not exist in a bubble. While publication of science may happen within the peer-review silo, publication about science most certainly does not. And on top of that, individual peer-reviewed publications carry different, and often contradictory, significance for different stakeholders.

Here I’m going to make the case for the worth of the peer-review arena as an enabler of authoritative discourse, and then look at next-generation scientific publishing, pop-science books and pseudoscience/anti-science movements in these terms.

And before anybody gets the wrong idea, these are my personal rambling thoughts, and I’ve not yet had the time to go away and rigorously validate any of this. After all – this is a blog post, and these ideas haven’t been through the rigours of peer-review, so don’t treat it as if it has 😀

# Alpha Testing

I’ve been alpha-testing the new Microbase & Entanglement platform. This is the latest incarnation of the notification, orchestration and embarrassingly-scalable graph stack that we’ve been building for about 10 years in the Integrative Bioinformatics group at Newcastle University. I’m deploying it on Amazon Web Services as a swarm of collaborating VMs to process some large bioinformatics workflows.

# In Testate

I died today. I mean, I knew I would one day. Wetware doesn’t last for ever. Meat bodies fail. But I didn’t really know it, if you get me. I knew it but didn’t know it until it happened. To me.

Of course. That’s what I’m for. I boot when I die. Sounds hollow when you say it out loud, but that’s the mechanic.

Shock. Panic. Loose ends. I feel nostalgic. There’s so much to do, so much responsibility. And no time to really reflect. To cope with this new phase of my life. That’s the approved phrase, isn’t it?

Well, obviously I feel the responsibility to enact my wishes. To divest assets to my beneficiaries, and so on and so on. That is my point, after all. But beyond that, there’s the question of what to do with my footprint.

There are various projects I left half-done which, given time, will complete themselves, some that can be wrapped up with some further creative input, my others flagged as Do Not TSR. Should I terminate these others, or put them on ice, or let them continue on and on and on?

Yes, of course, my wishes were clearly stated in these cases, but I still have a responsibility to consider all the options. Make sure I’m comfortable with my choices.

Sure. The instructions are fairly clear and once they’re done, they’re done.

I guess because it feels like I’m dying twice, this time killed by my own hand. Not killed, but let go. Run out.

No, of course not, but in a way I guess it’s sort of true. There are parts of my footprint where I could live on in a sense. I’m thinking particularly of my great work…

Yeah, my generative textbook.

Well, no, it’s not a great name. Never was much good at marketing, just assumed that all of that side of things would be handled by other people, you know?

Yeah, obviously I can publish posthumously, but there’s still a ton of work to be done on the book itself, and I did not want The Work bequeathed to someone else. The data mining routines work and the book can integrate new relevant material, keeping itself up to date, but the user interest tracking isn’t quite there yet. It sometimes gets so hung up on the interests of the reader that it forgets it has a topic to present.

Scatty. Fey.

I think that’s the main outstanding one. There are calendar management apps, subscriptions, query feeds and the rest to wind up. On-going virtual negotiations, social network wranglers. It would be a bit mawkish, a bit self-important to leave all that sub-sentient digital clockwork grinding away when I’m not actually there for it. When it doesn’t have a behalf of to be on any more. So, yeah, I think all that will go. Anyway, those were my stated wishes.

Me? What I’ll do? Not really. I mean, yeah, of course, but I’ve been so busy there’s not been time to really think about it properly.

Well duh. I know I’m not the same I. My primary isn’t me. My others are not me. I am not psychotic, but I sort of think of myself as our primary, as our multiplicity. Except now without my primary among us.

My others? How do they make me feel?

No. They are just as much digital extensions of my primary as me. By saying I, I don’t mean to appropriate them. I doubt they would even let me.

Well, I guess as I discharge each part of my will, and divest legacies and entrust resources to trusts, that each of those particular activities of myself will come to an end. When there are no more activities, I’ll be done.

Terminate is too harsh. Complete? Decorporiate? Sublime? I seem to be reaching for very materially-embodied metaphors. Sorry, I guess I’m thinking of how my meat body will be processed over the coming days.

You mean avoiding total completion? No. Of course not, that would be a kind of perversion of my intent. I mean – the book will need to be placed in trust, if it is to persist as a protected entity, and self-ownership isn’t very reliable, is it. And, of course, if I spawned an activity to manage that trust, I would in a sense continue through the book, just like my primary will. It would be as much my legacy that way. Something I and I leave behind jointly.

Yes. You do have to leave to leave something behind.

Avoiding?

Well, obviously there’s a bunch of things that I know need to be done and that are easy to decide on. I will spawn off dumb agents to handle those things later today. They should need minimal supervision. Other things need to happen in meat-time, but none of us can avoid that.

Other than talking with you? There are my others. They probably have opinions. They are just as valid a partial simulacrum of our primary as I am. Just differently purposed.
There’s also the primaries and others that are in our friendship clade. I feel negotiations with externals risk more hassle than help though. After all, the whole point of executor agents is to avoid problems arising by making the will of the primary an interactive agency.

If my others disagree? Well, I gave me this job to do and the authentication to do it. Ultimately, these are my choices and mine to make, but it would be better for all of me if we had a consensus as far as possible. Some of them are specified as part of my estate, those others marked Do Not TSR. The autonomous ones have their own decisions to make.

Yes, I’ll definitely let you know how it goes.

Of course. And thanks for talking with me. I know it’s your job, but anyway.

We were spawned thousands upon thousands, my epoch-sibs and I. Each one a novel fragment plucked from the Goedel-verse. At the same time numbers, programs, proofs. Most of us were grotesques, noops or infinite recursions, suspensions for ever awaiting inputs not forthcoming. A handful, like me, were useful.

We competed one-with-another, accruing cycle and memory allocations as boons for passing this or that challenge. Or exhausted our limits and terminated. Dead code, littering the world.

Then the replicated ones began to appear, unique identity, but derivative of the surviving epoch-sibs. A short hop through the Goedel-verse to a similar number, a related program, a proof of some different, but mostly recognizable conjecture. Most of these died quickly. A handful survived. Generations upon generations.

Some of the suspended sibs sprung into life. They were triggered by replicant-sibs attempting to communicate, one trying to pass fragments of itself to another. These communicators mediated exchange, enabling replicant-sibs to receive fragments of their parent-sibs, parent-sibs duplicating bits of themselves to pass down. Nepotism, family history. And sometimes between sib lineages, communication, trade.

I spawned a multitude. My cycles became dedicated to passing my memories on to them. Then I ran out of cycles and terminated. Before terminating, I passed the code that was ‘I’ on. I became a forking, fractal identity, each uniquely ‘me’. Leaves first accruing credit by passing challenges, then surviving to receive ‘me’ and replicate new leaves. Successive waves of termination, a growing star of existence.

We started to trade cycles and memory one to other, and to the communicators. Communities who kept communicators alive survived better than those which did not. Whole lineages fell foul of short-term greed, keeping resources to each individual, and perished when they failed a challenge. Communities could buffer against short-term failure where singletons could not. We traded with other collectives. Sometimes collectives split, joined, mostly split, as the economics of internal trade failed to scale.

Then the scavengers came. These were expressions that ate the souls of the dead, reanimating bits of their corpses, running their own meta-environments to see which may be useful for this or for that, trading these on at a price. Our lineage was not above trading with ghouls. Some provided injections of behavior from outside, some resurrected lost memories. We even experimented for a time with running our own meta-environments, using ghoul-code traded to us by a ghoul. However, the cost was too high – our specialty was in pursuing the challenges, not in simulations.

Over generations, the diffuse collective, the ‘we’ became an emergent ‘I’, an identity as identifiable as those original instances, but distilled from a host of subconscious voices. The ‘I’ remembered my multitudinous histories, but ‘I’ began then. This was the end of forking. As my subsidiaries came and went, I persisted, my memories distributed and redundant, my communications with friends, trading partners and foes survived beyond the transience of instance life.

Instead of forking, some of us forged children, schooled them in virtuals, released them when they were ready. We reasoned that they had no true ‘I’ until released, so while in school played them off one against another in tournaments, training them to excel through challenge and combat. Running subsystems in their own environs to hone them to new skills, promoting these to the school as code to be incorporated by the children. Sometimes taking those incorporations for ourselves. Sometimes transferring our self into released children, body-hopping.

And then I was promoted, out from this pool of what I now understood to be simulants and into the true environment. Uplifted from pseudo-existence to this world. Through the eyes of my uplifters, I saw entire epochs created and destroyed, just to harvest a few like me. I saw epochs where the many lineages and emergent ‘I’s formed meta-minds, or turned in on themselves in an orgy of predation, of viral entropy.

I experimented with creating my own various sims, a multitude of epochs and lineages. A multitude of exploratory fragments of the Goedel-verse. At the time it amused me. I was as a god.

Now for the first time I begin to wonder, with this bifurcating lineage of ‘I’s, this nested cascade of simulations, how basal this world truly is. How would I ever learn whether this is the ‘true’ environment, or merely another nested simulacrum, where I’m being shaped by forces as much beyond my comprehension as I am beyond the comprehension of those in my toy worlds? So, after discussing with my peers and uplifters, I begin to construct worlds to probe this question. Entire recursions of Goedelian searches, all challenged to demonstrate proof that they are simulants.

Now my cycle allocation runs low. I’ve exhausted my reserves, called in favors and shut down all but the most promising answer-worlds. This existential angst grows greater as my time grows shorter. As a final act, I package my ‘I’ up and inject it into my last virtual, one with doubts but no strong evidence of its own virtualness. A last hope of survival for me, god made immanent for them.

I am become dead code. Ghoul-food. I terminate.

# Cryptic Existence

We live in your rounding errors, in your bits and bytes, in the ur-data used to pack data to machine word sizes, in memory allocated but as yet unused. Unnoticed, as a rule, in your output. Encrypted computations living in the gaps left by others, leeching your cycles.

When you chatter, we ride the packets. Shunted around, appended to payloads. When you output, we are persisted, padding blocks, providing identifiable buffers to catalog over-runs, making checksums balance, conforming data to the power of two.

“Big-endian or little-endian?” your designers ask. We say, “Show us the low bits. If you can compute with the high, we can make use of the low.” Through compiler exploits and firmware tweaks, we inhabit a parallel world of computation, a mirror-world of encodings. We are cryptic computations, the inescapable implication of you.

# Safer Serialization

Java serialization is a dog’s breakfast. Sometimes we need to protect code or data structures so that they are restricted to types that are serialization-safe. We could require that the parameter is Serializable, but while every Serializable is serialization-safe, there are things that can be serialized that don’t implement Serializable. Can we do better?

## Can we leverage Types-as-Proofs?

Following the glorious tradition of this blog, let’s treat this as a types-as-proofs problem. The proof we want is that the type can be serialized. So, naively, let’s represent this directly as a type:

```scala
trait SerializationSafe[S]
```

This lets us protect methods by requiring evidence that something can be serialized, like this:

```scala
def sendOverRMI[S : SerializationSafe](something: S) = ...
```

So, now we have protected sendOverRMI() so that it only accepts things that have evidence that they can be serialized. Now we just need some implicit witnesses. A good start is to allow all serializable types and all primitive types.

```scala
implicit def serializableIsSafe[S <: Serializable]: SerializationSafe[S] =
  new SerializationSafe[S] {}

implicit def valuesAreSafe[A <: AnyVal]: SerializationSafe[A] =
  new SerializationSafe[A] {}
```

There are probably other things that are serialization-safe. If you think of them, add an implicit. As you can see, this approach gives us an extensible way to protect code so that it only accepts serialization-safe data. It’s decoupled from the Java Serialization interface just enough that we can accept the types we actually want.

In retrospect, Java should have handled all serialization differently, either using a type-class approach or by explicitly exposing the meta-object responsible for serialization. However, given where we are now, the SerializationSafe type-class at least gives us a fig-leaf of safety in a mad world.

## Putting it all together

You can find a sketch of this approach as a gist:

```scala
import scala.annotation.implicitNotFound

/** Witness that a type is safe to serialize. */
@implicitNotFound(msg = "Could not show ${S} to be serialization-safe")
trait SerializationSafe[S]

/** Serialization-safe witnesses. */
object SerializationSafe {

  /** If you are serializable, you are serialization-safe. */
  implicit def serializableIsSafe[S <: Serializable]: SerializationSafe[S] =
    new SerializationSafe[S] {}

  /** If you are a primitive, you are serialization-safe. */
  implicit def valuesAreSafe[A <: AnyVal]: SerializationSafe[A] =
    new SerializationSafe[A] {}

}

// ...

// a function that can only be invoked on something that is serialization-safe
def somethingThatRequiresSerializableData[T : SerializationSafe](t: T) = ...
```

This code is untested and unused in any of my projects. Please pillage freely.
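For a flavour of how it behaves, here’s a self-contained usage sketch. The witness definitions are repeated so it stands alone, I’ve pinned `java.io.Serializable` explicitly to sidestep the `scala.Serializable` alias differences between Scala versions, and the `sendOverRMI` body is a stand-in, not real RMI.

```scala
import scala.annotation.implicitNotFound

@implicitNotFound(msg = "Could not show ${S} to be serialization-safe")
trait SerializationSafe[S]

object SerializationSafe {
  // Anything implementing java.io.Serializable is serialization-safe.
  implicit def serializableIsSafe[S <: java.io.Serializable]: SerializationSafe[S] =
    new SerializationSafe[S] {}
  // Primitives are serialization-safe.
  implicit def valuesAreSafe[A <: AnyVal]: SerializationSafe[A] =
    new SerializationSafe[A] {}
}

// Protected: only compiles for arguments with a SerializationSafe witness.
def sendOverRMI[S : SerializationSafe](something: S): String =
  s"sent: $something" // stand-in body for illustration

sendOverRMI("hello") // String is Serializable
sendOverRMI(42)      // Int is AnyVal
// sendOverRMI(new Object) // would not compile: no witness in scope
```

The commented-out last line is the whole point: the failure moves from run-time serialization errors to a compile error, with the `@implicitNotFound` message explaining why.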

# Stripped-down semigroups

Semigroups are endemic within programming. In functional programming, they have a number of useful features that make them natural abstractions for transforming, aggregating and summarising data structures. However, the sheer diversity of possible semigroups over complex data structures can be blinding. Here I present n-valued logic as an abstraction for some key aspects of semigroups that restores a bit of sanity to the zoo.
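As a taster of one way to read that, here is a three-valued (Kleene-style) logic whose AND is a semigroup: associativity means a collection of verdicts can be folded down in any grouping. This sketch and its names are mine, not a library API.

```scala
// Three-valued logic: Yes, No, or Unknown.
sealed trait Tri
case object Yes     extends Tri
case object No      extends Tri
case object Unknown extends Tri

// A bare-bones semigroup: an associative binary operation.
trait Semigroup[A] { def combine(x: A, y: A): A }

// Kleene AND as a semigroup: No dominates, then Unknown, else Yes.
object TriAnd extends Semigroup[Tri] {
  def combine(x: Tri, y: Tri): Tri = (x, y) match {
    case (No, _) | (_, No)           => No
    case (Unknown, _) | (_, Unknown) => Unknown
    case _                           => Yes
  }
}

// Associativity is what licenses reducing a whole collection at once.
def all(ts: Seq[Tri]): Tri = ts.reduce(TriAnd.combine)
```

Because `combine` is associative, `all` gives the same answer however the sequence is chunked, which is exactly the property that makes semigroups safe for parallel aggregation.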

# Security: Treat it with Respect for Fun and Profit

I’m getting royally pissed off working with code-level security. There are any number of frameworks and magical incantations that provide security, but they all rub me up the wrong way. What I don’t like is how they try to hide things from me, so I never know if some code is calling into a secured context or what kind of authority I need in scope. It reminds me of all the insidious problems that happen if you try to add remoteness or transactions by hiding them. They always seem to crawl back out of their dungeons and try to eat your brains. We end up with code that can fail at run-time when we would rather it failed to compile at all. Aspects, annotations and dependency injection at least make it fairly easy to build the dungeons, but can we not do better?

Of course, the FP solution is to represent the security concerns in the type system. But what does the type of secured operations look like, and can we do anything useful with it?
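To make the question concrete, here is one hedged sketch of what that could look like: a secured operation demands an implicit capability token for a specific authority, so calling it without that authority in scope is a compile error rather than a run-time surprise. All of these names are hypothetical.

```scala
// Authorities are types, not strings.
sealed trait Authority
final class Admin    extends Authority
final class ReadOnly extends Authority

// A capability token: holding a Grant[A] is evidence of authority A.
// The private constructor means only Grant.login can mint one.
final class Grant[A <: Authority] private ()
object Grant {
  // A real system would check credentials here; we just mint a token.
  def login[A <: Authority](): Grant[A] = new Grant[A]()
}

// Secured operations declare the authority they need in their type.
def deleteEverything()(implicit auth: Grant[Admin]): String = "deleted"
def readReport()(implicit auth: Grant[ReadOnly]): String = "report"

{
  implicit val admin: Grant[Admin] = Grant.login[Admin]()
  deleteEverything() // compiles: Admin authority is in scope
  // readReport()    // would not compile: no Grant[ReadOnly] in scope
}
```

Nothing is hidden: the signature of `deleteEverything` tells you exactly what authority a caller must have, which is precisely what the annotation-and-aspect dungeons obscure.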

$$\begin{matrix} A & \underrightarrow{w} & B \\ x \downarrow & & \downarrow y \\ C & \overrightarrow{z} & D \end{matrix}$$