Teaching, learning…

So as the gods would have it, I’ve now become a textbook editor and sometimes-author. I’ve been contracted with a large publisher (name withheld) to update their Texas high-school geometry text to national, common-core standards. And I’m having a lot of fun with it!

But I’ve learned a few things. First among them: books written by committee are rarely good, and usually barely passable. Yep, of course I’m doing all that I can to change this, but there’s only so much of it that I’m allowed to write. Here are the main reasons that such books go astray:

  • They try to plug companion software and websites to the point of being reliant on them
  • They introduce concepts in strange order, since different chapters are written by different groups
  • The introduction of keywords and definitions is likewise inconsistent
  • Examples and problems are clearly recycled from older texts (how many people can relate to plowing a triangular field?)

As I said, I’m doing my part. But I find myself wishing that I could do more, just for the sake of making the kids’ experience better.

You wanna do what?!

So I have this fun idea. It’s something that can be done for about $10K or so, but I’m having a hard time with one particular aspect of it. Allow me to explain…

Imagine that you have two velocipedes (yes, they have to be velocipedes for… reasons) and you mount them side-by-side and about three feet apart with tubing. In between, you hang a lightweight, but comfortable chair. Perhaps something like a lawn chair. Using the same tubing, you mount four electric motors around the outside in a quadrocopter arrangement, complete with propellers. Electric motors are becoming quite efficient, and you can find some on the order of one HP per pound at reasonable prices.

So far, you have a person-sized, velocipede, steampunk quadrocopter. Which is great, but would be way too heavy to actually lift off. Which is why you need a 30′ helium balloon. This would be attached to the rest via the same tubing and a kevlar-fiber net over the top. Internal to the balloon is an electric compressor, so that the balloon can be dynamically deflated and inflated to provide just enough lift that the quad motors can do the rest. But since they’ll be relying in part on ground effect, the system is tuned such that you can only get about 10′ high.
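The numbers here are back-of-envelope, but they explain why the compressor matters. A minimal sketch (assuming a 30 ft diameter spherical envelope, standard sea-level air, pure helium, and ignoring the weight of the envelope and net — all my assumptions, not measured figures):

```python
import math

# Gross buoyant lift of a helium sphere. All constants are
# assumptions for illustration: sea-level standard air and
# helium at the same temperature and pressure.
RHO_AIR = 1.225      # kg/m^3
RHO_HELIUM = 0.1786  # kg/m^3

def balloon_lift_kg(diameter_ft: float) -> float:
    """Lift = mass of displaced air minus mass of the helium."""
    radius_m = (diameter_ft * 0.3048) / 2.0
    volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3
    return volume_m3 * (RHO_AIR - RHO_HELIUM)

print(f"{balloon_lift_kg(30.0):.0f} kg")  # → "419 kg"
```

Fully inflated, a 30′ sphere gives on the order of 420 kg (about 920 lb) of gross lift, so the compressor’s job is to bleed that down until the balloon offsets most (but not all) of the rig’s weight, leaving the quad motors only a small remainder to lift.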

I have it all laid out in my head, and trust me, it’s awesome! But now for the hard part. How much trouble would I get into for this? Technically, it’s a “manned, un-tethered, gas balloon” according to FAA regulations. But since the balloon is not providing the lift (just weight-offset), it’s also technically an ultralight. And since it relies on ground effect, it’s also a hovercraft and outside of the FAA’s purview.

So my guess is that the FAA won’t be able to decide between laughing at me and having me shot. Any thoughts?

The model A versus the model S

Modern aircraft are rather simpler to operate than in days gone by. Flat-panel displays and GPS navigation have largely replaced the many dozens of “steam gauges” that pilots once had to watch. But we’re still a long way away from a truly easy-to-fly airplane. And there are a few good reasons for that:

  • If something goes wrong, you can’t just pull over.
  • For historical reasons, modern displays often mimic older ones.
  • For regulatory reasons, certain instruments and controls are required.
  • Navigating in three dimensions isn’t a natural human function (we’re used to two).
  • We don’t really trust autopilots yet.

None of these are great reasons (just good ones), and I believe that aircraft controls will become much simpler in the future. By way of analogy, let’s look at the start-up sequences for a Ford Model A versus a Tesla Model S. First the Ford:

  • Check the tires (flats were not at all uncommon)
  • Check the radiator level
  • Check the fuel level
  • Get in the car and sit down
  • Turn on the cut-off switch
  • Set the gas mixture to between 3/4 and 1
  • Make sure the parking brake is pulled on (toward you)
  • Turn on the gas
  • Set the spark advance lever to “full retard”
  • Pull the throttle lever to about 1/3 down
  • Turn the carb adjusting knob all the way to the right
  • Turn the carb adjusting knob back one full turn to the left
  • Put the gear shift into neutral
  • Turn the key
  • Pull back on the choke
  • Press the floor starter button
  • After it turns over once, release the choke
  • After the engine turns over, set the spark advance to about 1/2
  • Close the carb adjusting rod to about 1/4 turn open
  • Set the mixture to between 1/2 and 1/4

Now for starting up your Tesla Model S:

  • Get in the car and sit down

Both procedures get you to the same place in your respective vehicles: engine on and ready to go. Modern aircraft are a little bit better than their 1920s counterparts, but really not by much. And while we don’t have to worry about the engine start-up sequence so much, we do have a lot of new things to do before taking off: setting transponders, navigation systems, electronic flight plans, etc.

So we are not up to the Tesla Model S in terms of usability. In my opinion (having worked on quite a lot of different aircraft, and flown a few of them) it doesn’t need to be this way. Sure, airplanes are inherently more complex than cars: a car has two degrees of freedom (forward-backward, left-right) while an airplane has up to six (up-down, forward-backward, left-right, pitch, yaw, roll). But that still doesn’t account for a lot of the overhead that is absolutely screaming for automation.

I think that in the future (and I mean that in a vague and nebulous sense), planes will do most of the thinking for us as far as navigation, take-off, and landing are concerned. The flat-panel displays should be alerting us only to potential issues, rather than faithfully recreating the steam gauges of the past.

End of rant.

Hacking aircraft for fun and profit

Modern commercial jets make use of AFDX networks for sending and receiving control and sensor data. The AFDX protocol is based on Ethernet, and (if you’re familiar with the OSI model) is identical up to layer 2. This means two things. First, that AFDX traffic can be (mostly) routed by standard Ethernet hardware. And second, that Ethernet software tools can (sometimes) be used to troubleshoot and hack AFDX networks.

The problem is that such tools are not designed to handle a number of the things that AFDX does. AFDX is deterministic, redundant, and more fault-tolerant than standard Ethernet. And so you generally need specialized hardware and software to interface with AFDX.

But it doesn’t have to be that way. A laptop’s Ethernet port should be able to read and write AFDX traffic just fine. The only reason that it cannot is that it doesn’t understand the upper level protocols. There have been a few projects to rectify this, and they have made use of the WinPcap libraries for low-level traffic reads and writes. And then they stopped there, because those involved were happy to leave it at the C-code level and lock it away behind corporate-secrecy firewalls.
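To make that concrete, here’s a minimal sketch of the sort of “upper-level” parsing a commodity NIC lacks. This is my illustration, not the actual library: it assumes ARINC 664 part 7 framing, where the destination MAC carries a constant prefix (commonly 03:00:00:00) followed by a 16-bit Virtual Link ID, and a one-byte sequence number trails the payload. Check your network’s ICD before trusting either assumption.

```python
import struct

# AFDX virtual links are addressed by destination MAC: a fixed
# prefix (assumed here to be 03:00:00:00) plus a 16-bit VL ID.
AFDX_MAC_PREFIX = bytes.fromhex("03000000")

def parse_afdx_frame(frame: bytes):
    """Return (vl_id, seq_no, payload), or None for non-AFDX traffic."""
    if len(frame) < 15 or frame[0:4] != AFDX_MAC_PREFIX:
        return None
    (vl_id,) = struct.unpack("!H", frame[4:6])  # 16-bit Virtual Link ID
    seq_no = frame[-1]          # one-byte sequence number trails the frame
    payload = frame[14:-1]      # skip the 14-byte Ethernet header
    return vl_id, seq_no, payload

# A made-up frame: VL 0x0042, dummy source MAC, EtherType 0x0800,
# four data bytes, sequence number 7.
frame = AFDX_MAC_PREFIX + b"\x00\x42" + b"\xaa" * 6 + b"\x08\x00" \
        + b"\xde\xad\xbe\xef" + b"\x07"
print(parse_afdx_frame(frame))  # → (66, 7, b'\xde\xad\xbe\xef')
```

Real AFDX work still needs the redundancy management and traffic-shaping layers on top of this; the point is only that the raw frames are ordinary Ethernet underneath.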

I was somewhat less than happy with this, and so I’ve written a suite of LabVIEW libraries that can hijack a PC’s Ethernet port [note to the NSA: when I say “hijack”, I’m talking about taking control of an Ethernet port, not an airplane] and read, write, and otherwise manipulate AFDX traffic. If I get clearance to do so from my client, I’ll open-source these libraries. And maybe write an article on it. I’m really hoping that I can share this with the world in some way, because it’s a really neat thing and fills an as-yet-unfilled niche.

Stay tuned for details!

A delicate balancing act…

There is so much of what I do that I would love to write about. Much of my work would make for some fantastic conference or journal articles. And some of it would even have made a great master’s or doctoral thesis. BUT… the reality of the situation is that I am almost constantly under some non-disclosure agreement or other. Not that the work I do is terribly secretive. There’s no national-security issue (usually) and no chance of any disclosure actually hurting whatever company I’m working for.

But the knee-jerk reaction nowadays is to hide everything that everyone does, all the time. Just in case. As though my obscure bit of network queuing code would sink the company were it ever revealed. From the standpoint of furthering the art, this is not a wise policy. From the standpoint of furthering my career, it’s damned annoying.

As always, XKCD said it best…

https://xkcd.com/664/

Failure must always be an option…

I am a scientist (if you know me at all, you’re saying “duh” right about now) but I am not a science cheerleader. By this I mean that I do not try to uphold the ivory tower at all costs. Primarily because, if we start to do this, then we are no longer doing science. That said, let me shed some light on a glaring problem with the way that science is done nowadays.

Most institutions are “publish or perish” in fact if not in outright statement. This means that, as a working scientist, you are regularly expected to publish your results. This part I’m actually okay with, in principle at least. Putting things into the public domain is a good thing. But now for the two not-so-good things (there are more than two, but I’ll only talk about these today).

First, most journals do not put their content into the public domain. You have to pay (and pay through the nose) in order to see it. This is not conducive to good science. Mind you, there are attempts to mitigate this. There’s the arXiv pre-print archive for physics, the Public Library of Science (PLoS) for bioscience-related content, and most journals now have a free-content section. There are even (illegal) torrent sites and aggregators dedicated to swiping content from closed journals and sharing it with the world (nope, I won’t provide a link for those). So this is slowly getting a bit better.

Second, and much more importantly, failure is not an option when it comes to publication. With very few exceptions, only successful experiments and proven theorems are accepted for publication. This is so absolutely wrong that it almost defies logic. Science would be far more transparent and progress much more rapidly (and, more importantly, honestly) if null results could be published. Again, this is slowly starting to change. Recently there have been attempts to rectify this to a degree: the Journal of Negative Results is one such attempt, though it limits itself to the biosciences.

Clearly these two factors are a huge hindrance to the reasonable progression of scientific research. I myself have been stymied in the past, needing to see a particular set of results, but being unwilling or unable to pay the exorbitant journal-access fees. Additionally, I could have been saved a lot of trouble had null results been published. But that’s not how scientific publishing works. And so I (and countless others) have wasted a significant amount of time following paths that could easily have been avoided, if only access were more open and honest failures held in equal esteem to successes.

I’ll end it here, though I’ll pick this up again shortly. And if you’d like to read more, here’s a better written article:

Unpublished Results Hide the Decline Effect

For your amusement…

A Wrinkle in Time is so much more amusing when you mentally replace “IT” (the name of the big bad monster thing) with “I.T.” (as in “the I.T. department”). Try for yourself:

“Calvin’s voice again. ‘Anyhow you got her away from IT. You got us both away and we couldn’t have gone on holding out. IT’s so much more powerful and strong than—How did we stay out, sir? How did we manage as long as we did?’

“Her father: ‘Because IT’s completely unused to being refused. That’s the only reason I could keep from being absorbed, too. No mind has tried to hold out against IT for so many thousands of centuries that certain centers have become soft and atrophied through lack of use. If you hadn’t come to me when you did I’m not sure how much longer I would have lasted. I was on the point of giving in.'”

What I’m up to (part whatever)…

So much to do, so little time.  But it’s all good, so I’m not feeling overwhelmed.  Just the right amount of whelm, I suppose.  Anyways, on tap for this week is paper writing (due tomorrow!), art project materials gathering (the name of the project is “Your own, personal Jesus” and I’m still keeping the rest a secret), Arduino development (also a secret), a big LabVIEW project that is to serve as a proof-of-concept for future work, some Android programming (yep, also secret), a new web site (secret), and journal article reviews.

In and around all of this is some financial/business crap that needs taking care of.  That one seems to be never-ending, probably because it is actually never-ending.  Someday, I’ll be making enough to hire a business manager to foist all of that onto.  Until then, I just have to deal.

So yeah, a lot of secret stuff still happening.  At least I’m dropping a hint for the art project.  It’s going to be a busy week!

Science v. Art — the final word

I’ve had pretty much enough of two aspects of the science v. art arguments. The first argument is that they have been, are now, and forever shall be, at odds with each other. Bullshit. Those who make such arguments tend to have no knowledge of either science or art. I am a scientist who dabbles in a variety of artistic endeavors. My girlfriend and my best friend are both artists who are very scientifically-minded. There are no differences in our philosophical outlooks. More on this in a moment.

The next common aspect of the argument is that science and art need each other: science to improve the quality of art, and art to enable visualization of science. Well, yeah, maybe. But that misses the point. At least those who put forth that argument are not perpetuating some mythical war between the two.

Here’s how it really is, folks: They are the very same thing!

We are puny humans with very small minds and a very limited capacity to describe and define the universe. Reality around us is so much grander than we can ever know, let alone describe. To paraphrase Oliver Sacks, not only do we not live in reality, we’ve never even visited the place. And so, in an attempt to capture its beauty, we create metaphor.

Science does so by using a variety of descriptive languages (various mathematical systems, and words as precisely defined as the language allows). But science goes in knowing full well that all of these constructs are nothing more than metaphor for something that may never be fully understood, except in limited context.

Art does so by using a variety of descriptive languages (visual symbols, forms, musical notes, and words as the language allows). But art goes in knowing full well that all of these constructs are nothing more than metaphor for something that may never be fully captured, except in limited aspect.

Both rely on the same tools and insights and reasoning; indeed, the very same parts of the soul. Because in all cases, the sciartist is attempting to express an aspect of the universe that they see, in order to better understand it, and maybe even present it to a wider audience.

So enough of the arguments. Science is art. Art is science. Both are nothing but metaphor for the vast, the sublime, the beautiful, and the unknowable. End of rant.

Beautiful Failure

I fail at things.  A lot.  Almost everything, really.  And the only reason that I have actually succeeded at the few things that I have is because I’m either too stubborn or too stupid to know when to quit.  I suspect a little of both.

I’m working on a side-project (yes, another one) that is a sort of combined art-science-interaction piece to celebrate all of the ways in which we fail.  I’ll probably be posting a link to this in the next couple of months or so.  Stay tuned.