I was looking at different telescope models tonight, so I thought I’d put together a collection of sample images for comparison.  I also wanted to see how consumer telescopes stacked up against actual observatories, and even the Hubble Space Telescope. Here’s a visual comparison of how well different types of telescopes can photograph Saturn (about 746 million miles from Earth at its closest approach).

Entry-level consumer telescope (Celestron AstroMaster 114 EQ, $178)

Image credit

Mid-level consumer telescope (Celestron NexStar 8 SE, 8″ aperture, $1249)

Image credit

High-level consumer telescope (Celestron CGE Pro 1400, 14″ aperture, $9999)

Image credit

Small observatory (Pic du Midi Observatory, 1-meter aperture, cost unknown)

Image credit

Space Telescope (Hubble, $1.2 billion)

Image credit

BONUS: Space Probe (Cassini–Huygens, $3.2 billion)

Hexagonal cloud pattern at Saturn’s north pole
Image credit

Java web development with Spring

A few months ago, if someone mentioned writing Java applications for the web, I would have had horrible flashbacks to kitschy Java applets from the late ’90s web that would take forever to load.

But for the last two months, I’ve been helping write a web application based entirely in Java. We chose Java because developers on staff were already trained in JavaServer Pages (JSP). I had only really worked on web apps in PHP before, but I’m starting to favor Java-style web development for a few reasons:

  • Early on we chose to go with a popular Java framework called Spring.  Spring maintains a sub-project called Spring MVC, which uses the popular Model-View-Controller pattern to separate architecture components. Spring also relies on other great design paradigms, like Dependency Injection (DI), where class dependencies are “injected” at run-time, making components more modular.  You can even wire in dependencies using the @Autowired annotation.  Here’s an example of a Spring controller that takes an item id and uses a dependency-injected MyService to fetch and add the object to the JSP view:
  • Java development is easy in Eclipse, and Spring offers its own Eclipse-based IDE (Spring Tool Suite) with Maven built in.  Maven is a build automation tool that greatly simplifies managing project dependencies and external libraries.  Need to add log4j logging? Just paste the Maven dependency into your pom.xml file.  No copying JARs around!
  • You can debug your web application using Eclipse, which is much easier than debugging PHP code without an IDE.
  • Maintaining Java classes is much easier than maintaining PHP code interwoven in HTML files.
  • Java’s object-oriented nature seems more natural than PHP’s.  Java objects became really important when we started using Hibernate for our Object-Relational Mapping (ORM). Basically, database tables become classes, and records from those tables are instances of the class.  This behavior is second nature to Java.
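The controller example mentioned in the first bullet was lost along the way, but it probably resembled something like this sketch (MyService, Item, and the "item" view name are hypothetical stand-ins for the application's real classes):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class ItemController {

    // Spring injects an implementation of MyService at run-time
    @Autowired
    private MyService myService;

    @RequestMapping("/item/{id}")
    public String showItem(@PathVariable("id") long id, Model model) {
        Item item = myService.findItem(id);  // fetch the object by id
        model.addAttribute("item", item);    // expose it to the JSP
        return "item";                       // resolved by the view resolver to item.jsp
    }
}
```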

I realize there are probably frameworks for PHP that can do everything above, and I know there are some that claim to be faster/easier/prettier/more aerodynamic/better husband material/etc., but at least for our team, Java seems like it was the right way to go.

Comic-Con 2013

Finally here in San Diego to start my new job with Qualcomm, I visited Comic-Con International and saw some pretty interesting sights.  My particular favorites were anything Minecraft- or The Walking Dead-related.

Wifi-based trilateration on Android

Triangulation offers a way to locate yourself in space.  Cartographers in the 1600s originally used the technique to measure things like the height of a cliff, which would be too impractical to measure directly.  Later, triangulation evolved into an early navigation system when Dutch mathematician Willebrord Snell showed that three known points can be used to locate a fourth point on a map.

While triangulation uses angles to locate points, trilateration uses lateral distances.  If we know the positions of three points P1, P2, and P3, as well as our distance from each of those points, r1, r2, and r3, we can look at the overlapping circles formed to estimate where we are relative to the three points. We can even extend the technique to 3D, finding the intersecting region of spheres surrounding the points.

In this project, I’d like to show how we can use the Wifi signal strength, in dB, to approximate distance from a wireless access point (AP) or router.  Once we have this distance, we can create a circle surrounding an AP to show possible locations we might occupy.  In the next part of the project, I plan to show how we can use three APs to estimate our position in a plane using concepts of trilateration. (Note: I haven’t had time to implement this, but you can use this Wiki article to implement it yourself).
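Although the full positioning step isn't implemented here, the circle-intersection math itself can be sketched in plain Java (class and method names are mine, and this assumes the three circles intersect cleanly). Subtracting the circle equations pairwise cancels the squared terms and leaves two linear equations in x and y:

```java
public class Trilateration {
    // Solve for (x, y) given three circle centers and radii.
    // Subtracting circle 1's equation from circle 2's (and 2's from 3's)
    // yields two linear equations: a*x + b*y = c and d*x + e*y = f.
    public static double[] locate(double x1, double y1, double r1,
                                  double x2, double y2, double r2,
                                  double x3, double y3, double r3) {
        double a = 2 * (x2 - x1), b = 2 * (y2 - y1);
        double c = r1 * r1 - r2 * r2 - x1 * x1 + x2 * x2 - y1 * y1 + y2 * y2;
        double d = 2 * (x3 - x2), e = 2 * (y3 - y2);
        double f = r2 * r2 - r3 * r3 - x2 * x2 + x3 * x3 - y2 * y2 + y3 * y3;
        // Cramer's rule for the 2x2 linear system
        double x = (c * e - f * b) / (e * a - b * d);
        double y = (c * d - a * f) / (b * d - a * e);
        return new double[] { x, y };
    }
}
```

With noisy real-world distances the circles won't intersect exactly, so a least-squares fit over many samples would be more robust than this ideal-geometry version.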

Trilateration using 3 access points providing a very precise position (a) and a rougher estimate (b)

Determining distance from decibel level

There’s a useful concept in physics that lets us mathematically relate the signal level in dB to a real-world distance.  Free-space path loss (FSPL) characterizes how the wireless signal degrades over distance (following an inverse square law):

FSPL = 20 log10(d) + 20 log10(f) + 92.45

The constant there, 92.45, varies depending on the units you’re using for other measurements (right now it’s using GHz for frequency and km for distance).  For my application I used the recommended constant -27.55, which treats frequency in MHz and distance in meters (m).  We can re-arrange the equation to solve for d, in Java:
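Rearranged for d (frequency in MHz, signal level in dBm, distance in meters), the helper probably looked something like this sketch (class and method names are my own):

```java
public class SignalDistance {
    // Estimate distance in meters from free-space path loss:
    //   FSPL = 20*log10(d) + 20*log10(f) - 27.55   (f in MHz, d in m)
    // Solving for d:
    //   d = 10 ^ ((27.55 - 20*log10(f) + |level|) / 20)
    public static double calculateDistance(double signalLevelInDb, double freqInMHz) {
        double exp = (27.55 - (20 * Math.log10(freqInMHz))
                + Math.abs(signalLevelInDb)) / 20.0;
        return Math.pow(10.0, exp);
    }
}
```

As a sanity check, a -60 dBm reading on 2.4 GHz channel 1 (2412 MHz) works out to roughly 10 meters, and every additional 6 dB of loss doubles the estimated distance, as the inverse square law predicts.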

Now, there are a few drawbacks to this rough approximation:

  1. FSPL explicitly requires “free space” for calculation, while most Wifi signals are obstructed by walls and other materials.
  2. Ideally, we will want to sample the signal strength many times (10+) to account for varying interference.

Problem (1) could be addressed in the future by using the signal-to-noise ratio to more accurately estimate (that sounds like an oxymoron) obstructions to the wifi signal.  Problem (2) can be addressed in code by sampling many times and computing the average signal level.

Using the above code along with Android’s WifiManager and ScanResult classes, I can print out our final measurements:
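That measurement loop might have looked roughly like this (an untested Android sketch; the class name is mine, and the FSPL rearrangement is inlined so the snippet stands alone):

```java
import android.content.Context;
import android.net.wifi.ScanResult;
import android.net.wifi.WifiManager;
import android.util.Log;

public class WifiScanner {
    private static final String TAG = "WifiScanner";

    // List visible APs with an FSPL-based distance estimate for each.
    public static void logDistances(Context context) {
        WifiManager wifi =
                (WifiManager) context.getSystemService(Context.WIFI_SERVICE);
        for (ScanResult result : wifi.getScanResults()) {
            // result.level is in dBm, result.frequency is in MHz
            double exp = (27.55 - 20 * Math.log10(result.frequency)
                    + Math.abs(result.level)) / 20.0;
            double meters = Math.pow(10.0, exp);
            Log.d(TAG, result.BSSID + " (" + result.SSID + "): "
                    + result.level + " dBm, ~" + meters + " m");
        }
    }
}
```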

And we can get back data that appears to be correct when moving further away from my test router (MAC address: 84:1b:5e:2c:76:f2):

[Image lost during host transition, but basically just showed how the distance increased]

Quickie: Which way does gravity point?


Everyone knows a compass always points north, and most people know it’s because of magnetic fields present on Earth’s surface.  There’s another force here on Earth directed to a central point, and that’s gravity.  Humans are quite adept at sensing gravity thanks to equilibrioception, where fluid contained in structures in our inner ear provides feedback to help us stay balanced.

But machines, too, can detect gravity thanks to the simple accelerometer.  Already present in most smartphones today, accelerometers react to gravity with tiny spring-mounted masses, producing a voltage difference that we can measure and turn into meaningful units.

On Android, we can easily read the accelerometer data:
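A minimal sketch of that read loop, assuming a bare Activity (the class name and fields are mine):

```java
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class AccelActivity extends Activity implements SensorEventListener {
    private SensorManager sensorManager;
    private float x, y, z;

    @Override
    protected void onResume() {
        super.onResume();
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_UI);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Values are in m/s^2; at rest, they reflect gravity (~9.81 total)
        x = event.values[0];
        y = event.values[1];
        z = event.values[2];
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```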

Using accelerometers to emulate the human perception of gravity

I’d like to show how we can use an Android phone (even my dusty old Droid Eris) to visualize the force of gravity.  To save time, we’re only going to use two dimensions, x and y, but the technique used here can easily be extended into 3D.

Let’s represent gravity the same way students in a high school physics class would: with an arrow pointing down.  The goal is to rotate the phone (changing the accelerometer’s x and y readings) while the arrow keeps pointing down, illustrating the direction of gravity.

The first thing we’ll need to do is convert the rectangular coordinates given to us (x and y) to a polar system (r, θ), where extracting an angle is much easier.

Thinking back to high school trigonometry, the inverse tangent will provide that angle directly.  Java has a built-in method, atan2(), which even gracefully handles the divide-by-zero case when x = 0. Because the image rotation I’m using is based on degrees (more on that in a moment), we can convert the angle from radians to degrees (0–360°).
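The lost snippet for that conversion probably looked something like this standalone sketch (the method name is mine):

```java
public class Angles {
    // Convert accelerometer (x, y) readings to a rotation in degrees.
    public static double rotationDegrees(double x, double y) {
        double radians = Math.atan2(y, x);        // atan2 handles x == 0 gracefully
        double degrees = Math.toDegrees(radians); // range: -180 .. 180
        if (degrees < 0) {
            degrees += 360.0;                     // normalize to 0 .. 360
        }
        return degrees;
    }
}
```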

That gives us the degree rotation of the phone in 2D.  We’re almost there.  To determine the degree that we would like the gravity arrow to point, we need to offset that degree, modulo 360 to keep us within the range (0-360°):
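A sketch of that offset, assuming the arrow bitmap points up at 0° so a 180° flip makes it point down (the exact offset would depend on the artwork):

```java
public class ArrowAngle {
    // Offset the phone's rotation so the (up-pointing) arrow image ends up
    // pointing toward the ground; modulo 360 keeps the result in range.
    public static double arrowDegrees(double phoneDegrees) {
        return (phoneDegrees + 180.0) % 360.0;
    }
}
```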

Now it’s just a matter of re-drawing the arrow image on the screen.  Android offers some fancy animation techniques, but for this quickie project, I chose to use a matrix rotation:
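The matrix-rotation step might have looked roughly like this (an untested Android sketch; names are mine):

```java
import android.graphics.Bitmap;
import android.graphics.Matrix;
import android.widget.ImageView;

public class ArrowRenderer {
    // Rotate the arrow bitmap about its center and redraw it in the view.
    public static void drawRotatedArrow(ImageView view, Bitmap arrow, float degrees) {
        Matrix matrix = new Matrix();
        matrix.postRotate(degrees, arrow.getWidth() / 2f, arrow.getHeight() / 2f);
        Bitmap rotated = Bitmap.createBitmap(arrow, 0, 0,
                arrow.getWidth(), arrow.getHeight(), matrix, true);
        view.setImageBitmap(rotated);
    }
}
```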

With that code in place, we can finally visualize the force of gravity, at least in two dimensions:

This project was a quick one (writing this blog entry actually took longer than the code itself), but I think it’s important to show how we can figuratively “teach” a device a human trait and give it a new skill.  For instance, with a faster refresh rate and perhaps a little more accuracy, a robot could use this technique to keep itself balanced, much like humans use information from gravitational forces to stay balanced.

GitHub repository available here.