“Bored with all those holiday parties where category theory is scarcely mentioned, if at all? Well hold on to your hats, because this party is NOT CONTAINED IN THAT SET.”

Holiday Party - ny-scala (New York, NY) - Meetup



“A diverse study group to explore Haskell, the most powerful programming language yet. Cause programming is hard and cats are busy but if they can do it so can you.”

Haskell_For_Cats (New York, NY) - Meetup




Here are those slides.

makingmeetup:

On Monday we hosted a tech talk on what’s coming, and why, in Meetup 2 for Android.

Leave a comment here if you have any questions. And if you’re interested in helping us bring people together on Android and other platforms, we’re hiring.

This post was reblogged from Making Meetup.




Design matters, on Android

Tonight we hosted an Android meetup at Meetup. I took this fairly bad photo of Jimena, Mike, and John, from the back of the room on a Nexus 5. For the conditions, I think the 5 did pretty well.

They talked about the progress we’re making in rebuilding the mobile apps as actual social networking apps. Our apps were originally designed as calendars only; we bolted on social and group features one by one, muddling navigation on Android in the process. This project has been about producing a coherent navigation scheme that can better expose current features and make some room for personalized content to come.

If slides are posted I’ll link to them, but you’ll also be able to install the new app yourself in the next few weeks.

Unfiltered and Slick

People sometimes ask for example “crud” webapps for Unfiltered. I’ve never made one because a) it’s boring and b) most people use Unfiltered to make APIs. But we had Jan Christopher Vogt for our September meetup and I was reminded how much I liked Slick (or how much I liked ScalaQuery, as it was called when I last used it).

Making a Slick app would not be boring. For even more interestingness and less future-irrelevance I made it touch friendly. Looks like this:

Breeds

It’s a database for you to enter dog breeds and famous dogs of that breed. Super useful. If you want to update and delete, you have to write the code for that. Ruff ruff.

The unfiltered-slick.g8 template is pretty simple. I adapted the DB setup code (and ASCII art!) from Chris’s ny-scala demo project.

If you want to try it out, install giter8 and run this:

g8 unfiltered/unfiltered-slick

My favorite thing in there is this little function that makes a directive for loading data into a Breed instance:

def breedOfId(id: Int)(implicit session: Session) =
  getOrElse(
    Breeds.ofId(id).firstOption,
    NotFound ~> ResponseString("Breed not found")
  )
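In essence, getOrElse turns an Option into success-or-error. A minimal model of that shape in plain Scala (a sketch of the idea, not the actual Unfiltered directive types) is just Option#toRight:

```scala
// Toy model of the getOrElse directive above (not Unfiltered's real types):
// a present value becomes a success, an absent one becomes the error response.
def getOrElse[A, E](opt: Option[A], orElse: => E): Either[E, A] =
  opt.toRight(orElse)

val found   = getOrElse(Some("Beagle"), "Breed not found")
val missing = getOrElse(None: Option[String], "Breed not found")
```

In the real directive, the error side is a response function like NotFound ~> ResponseString(…) rather than a string.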

This is built with Slick 2.0.0-M2. It uses H2 in-memory and I haven’t tested it with anything else. If I didn’t do something the slickest way, send a pull request.




Recentralizing the Internet

A few weeks ago I was minding my own business, flicking through twitter on my phone. Someone had linked to the appalling government surveillance story of the day and I was preparing to feel disgusted and helpless. Only this time, my mobile data provider took offense before I had a chance to.

This site contains offensive content

Well, that’s one way to keep the people in line! I let the linker know that the entire privacysos.org site was blocked by my service provider, but it turns out the blockage was a little more complicated than that. Most people using T-Mobile weren’t having problems. And when I tried the same site using Chrome instead of Firefox, it loaded normally. What gives?

Working in Chrome

I figured that parental control settings were the differing factor between me and other customers, and indeed that seems to be the case. Though I never turned on a filter for my account, my phone company assumes I’m a “young adult” at that special age 17-18 when one must be sheltered from information about civil rights. Pre-paid customers are presumed to be juveniles, while grownups who pay more for traditional subscriptions get unimpeded net access (for now).

At that point I could have adjusted my account settings, but I still wanted to know why the site wasn’t blocked in Chrome, and why an ACLU site was considered “offensive content” in the first place.

Digging into the Chrome question, I found one clue: the site was blocked in incognito mode, but not otherwise. A significant feature of incognito mode is that browser cookies from other sessions are not sent with requests — could it be that some cookie, like a login cookie for the T-Mobile account management site, caused the filterware to back off?

That shouldn’t have been the case, since cookies are only sent for the domains they belong to. A T-Mobile login cookie shouldn’t be sent with a request to privacysos.org, so how would the filter know to handle the request differently? Still, this was the best guess I had. I wanted to know exactly what Chrome was sending with those requests, and since they weren’t passing through wifi or any other network under my control, I couldn’t use Wireshark. Instead, I set up Chrome remote debugging.

And finally, in the network inspector, I spotted the gremlin: Google’s Data Compression Proxy.

via: 1.1 Chrome Compression Proxy, 1.1 Chrome Compression Proxy

Aha. T-Mobile had been cut out of the web traffic filtering business as a side effect of Google’s own web traffic optimizing business. To test this theory, I looked for a switch to turn off the Google proxy. But surprisingly, it just wasn’t there.

Google has been seeding this option into the Android Chrome application as a split test, and most likely I did agree to turn it on at some point; but in my case the settings toggle that should appear afterwards didn’t. I spent some time clearing app caches, uninstalling, and reinstalling — nothing caused the option to appear. Eventually I installed Chrome Beta, where the proxy option does reliably appear under the oblique label “Reduce data usage”. In addition to reducing data usage, I was able to confirm that it handily circumvents T-Mobile’s primitive content filtering.

But don’t break out the champagne just yet, 17-18 year olds! While I appreciate that Google’s proxy is engineered to improve performance generally (like other proxies before it), it would be foolish to ignore that it is also a filter.

Help us help you

All I can really do here is change masters, from one single point of control to another. Indeed, Google’s proxy is disabled in incognito mode — an acknowledgment that even a “secure” proxy is unsuitable for private browsing?

Ultimately this isn’t a choice between different levels of privacy, but a choice between different vectors of exposure.

A lot of people still trust Google, with some justification. But as with any transfer of power we should consider its implications not just for the current regime but to the next one that will presume to inherit it, and the one after that. If Google is slightly more “evil” every year, how do we feel about Google having full knowledge and control over our web browsing in n years?

Google’s proxy stands to control increasing portions of web traffic, eventually majorities. We can chuckle (and I do) at how it thwarts a crusty old phone company’s content filter without even trying, but there will come a day when a carrier refuses to allow Chrome as a default browser on their crapware phones unless their own content filtering is integrated with Google’s. And then what?

Having solved the mystery of Chrome, I went back to my phone company and asked why they were blocking an ACLU web site as “offensive.” They of course asked me to email some blackhole instead of making my requests in broad daylight. So I did that.

To: contentcontrol2@t-mobile.com
Subject: unblock request

Hi, I noticed that this site is blocked from “young adults” for its “offensive content”: http://privacysos.org/

The site is published by the ACLU of Massachusetts and has information about privacy rights online. It does not have any offensive content that I have been able to discover. Could you correct this?

Nathan

No one replied to my email, and privacysos.org remains blocked to 17 year olds — or more specifically and ominously, it is not considered to be “content suitable for age 17 and up”. As such it’s likely blocked for far more people in the “and up” category: normal old people who haven’t taken a deep dive into their account settings to assert their adulthood multiple times.

Having satisfied my curiosity I finally did turn off T-Mobile’s sex/ACLU filter, but to do so I had to “prove” I’m at least 18 unwholesome years old by giving my name, address, and part of my social security number. So much for “you restrict access to adult web content on your family’s T-Mobile phones” — this step’s only purpose is to prevent young account holders themselves from disabling the filter.

Like all censorship schemes T-Mobile’s is ruled by prejudice rather than consensus — it is “not foolproof”, in their cute phrasing. The first and only thing it has blocked for me is information that 17 year olds ought to know as they prepare to accept the responsibility to vote: their basic rights as citizens.

Untupled

Unfiltered validation has been on my to-do list for as long as there’s been an Unfiltered. I tried a few different approaches, none of which were good enough to move it to-done.

First I tried doing it with extractors, the golden hammer of Unfiltered.

Requests are commonly accompanied by named parameters, either in the query string of the URL or in the body of a POST. Unfiltered supports access to these parameters with the Params extractor.

Extracting Params

The results were unsatisfying. If you want typed results, and you probably do, the only thing you can do with unacceptable input is refuse to match it, typically responding with a 404.
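To see the limitation concretely, here is a hypothetical extractor in the spirit of Params (the names and shapes here are my own, not the actual Unfiltered code): when a parameter is missing or malformed it can only decline to match, with no way to say why.

```scala
// Hypothetical typed-parameter extractor: succeeds only when "id" is
// present and parses as an Int; otherwise it refuses the match entirely,
// and the caller falls through to a generic 404.
object IntId {
  def unapply(params: Map[String, Seq[String]]): Option[Int] =
    for {
      values <- params.get("id")
      value  <- values.headOption
      number <- scala.util.Try(value.toInt).toOption
    } yield number
}

val matched   = IntId.unapply(Map("id" -> Seq("7")))
val unmatched = IntId.unapply(Map("id" -> Seq("seven")))
```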

I never expected the extractor approach to work out, but the extractors are easy to build and easy to use — for mediocre results. My grander scheme was to support parameter validation such that it would be easy to do these things:

  1. build your own reusable validators.
  2. build your own validators inline.
  3. respond to multiple unacceptable parameters with multiple error messages.

And I tried for a while to do this by hacking for-expressions. It was difficult to build, and difficult to use. I didn’t use it much myself and periodically forgot how it worked. Something kept me from ever documenting it. Common sense?

Thanks Norway!

Later, Jan-Anders Teigen contributed directives to Unfiltered. Directives use for-expressions to validate requests in a straightforward way. Missing a header we require? Respond with the appropriate status code. You can also use them for routing, by orElse-ing through non-fatal failures. What you couldn’t do was accumulate errors, my requirement #3.
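The behaviors directives do provide can be modeled in miniature (a plain-Scala sketch of my own, not the real Directive types): a directive reads a request and either fails with a response or succeeds with a value, and orElse routes through failures.

```scala
// Toy directive: reads a request (here just a header map) and either
// fails with an error message or succeeds with a value.
case class Directive[+A](run: Map[String, String] => Either[String, A]) {
  // routing: on failure, fall through to the other directive
  def orElse[B >: A](other: Directive[B]): Directive[B] =
    Directive(req => run(req) match {
      case Left(_) => other.run(req)
      case success => success
    })
}

// Missing a header we require? Fail with an appropriate message.
def header(name: String): Directive[String] =
  Directive(req => req.get(name).toRight(s"missing header: $name"))

val accept = header("Accept").orElse(header("Content-Type"))
```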

I had a week with no internet in the Adirondacks, and it seemed as good a time as any to face my old nemesis, parameter validation for the people.

Thinking clearly

The first thing I did was to build into directives a syntax for interpreting parameters into any type and producing errors when interpretation fails.

This took some time to get right, mostly choosing what to call things and crafting an API for both explicit and implicit use. Define your own implicit interpreters to produce your own types and error responses, then you can collect data like a lovesick NSA officer.

val result = for {
  device <- data.as.Required[Device] named "udid"
} yield device.location

This could report failure in different ways according to your own interpreter: the udid parameter is missing, isn’t in the right format, isn’t in the database, the database is down, and so on.

Snake in the grass

After this I decided to tackle my dreaded requirement #3. This time I wouldn’t abuse for-expressions or change the way directives fundamentally work. When directives are flat-mapped together, the result is a mapping of all their successes or the first failure.

So how do we produce multiple responses for multiple failures? I settled on the idea of combining multiple directives into one, which would itself produce a mapping of all their successes or a combination of all their failures. This combined directive would then, typically, be flat-mapped to other directives in a for-expression.
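In a toy model (plain Either values standing in for directive outcomes, with lists of error strings; my own simplification, not the real types), the joining behavior looks like this:

```scala
// Toy model of the & combination: two validation results join into one
// that carries both successes as a tuple, or all failures together.
def join[A, B](
    ea: Either[List[String], A],
    eb: Either[List[String], B]): Either[List[String], (A, B)] =
  (ea, eb) match {
    case (Right(a), Right(b)) => Right((a, b))
    case (Left(xs), Left(ys)) => Left(xs ++ ys)
    case (Left(xs), _)        => Left(xs)
    case (_, Left(ys))        => Left(ys)
  }
```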

The combination step is easy enough to express, even if it was a little tricky to implement.

scala> (data.as.Required[String] named "a") &
  (data.as.Required[Int] named "b") 
res1: unfiltered.directives.Directive[
  Any,
  unfiltered.directives.JoiningResponseFunction[String,Any],
  (String, Int)] = <function1>

The first type parameter of Directive has to do with the underlying request, the second is the joined error response type, and the third is the success type — a tuple of the two directives’ success types.

The joining method & produces a tuple so that successes preserve all their type information. We might use it in a for-expression like so:

(a, b) <- (data.as.Required[String] named "a") &
  (data.as.Required[Int] named "b") 

But what happens if there’s more than one independent parameter?

scala> (data.as.Required[String] named "a") &
  (data.as.Required[Int] named "b") &
  (data.as.Required[BigInt] named "c")
res2: unfiltered.directives.Directive[
  Any,
  unfiltered.directives.JoiningResponseFunction[String,Any],
  ((String, Int), BigInt)] = <function1>

Oh dear — our tuples are nested. To assign the values now, we would need to nest tuples exactly the same.

((a, b), c) <- (data.as.Required[String] named "a") &
  (data.as.Required[Int] named "b") &
  (data.as.Required[BigInt] named "c")

This could get rather confusing, especially when we want to add or remove a parameter later.

Nesting

To understand why the tuples are nested, think about what the & method does. For d1 & d2, it produces a new directive where the success values are a tupled pair. It does this always, according to its return type.

Now consider the case of three directives: d1 & d2 & d3. We could write it without infix notation: d1.&(d2).&(d3). It’s clearer still with parentheses grouping the infix operations in their normal order of evaluation: ((d1 & d2) & d3). With repeated applications of & we’ll necessarily produce typed, nested pairs. You can see a simpler example with the standard library’s tuple constructor:

scala> 1 -> 2 -> 3
res3: ((Int, Int), Int) = ((1,2),3)

So this makes sense, even if we don’t like it. We’d rather access the results as if the structure were flat. A Seq would allow that, but we’d lose the component types. Another approach would be to apply a single joining function across all the directives we want to combine:

a, b, c <- &(data.as.Required[String] named "a"),
  (data.as.Required[Int] named "b"),
  (data.as.Required[BigInt] named "c")

This looks pretty nice, but it comes at a high cost: the function & would have to be defined specifically for 2 arguments, 3 arguments, and so on up to 22. And if somebody wanted to use it for 23 independent parameters, too bad. You’ll see that kind of code in some libraries including the standard library; I think it’s usually generated. But I really didn’t want to add it to Unfiltered if I didn’t have to.

And I didn’t have to.

In Scala, common data types are built into the standard library instead of the language. So we can do things like this without List being special, or known to the compiler at all.

scala> val a :: b :: c :: Nil = 1 :: 2 :: 3 :: Nil
a: Int = 1
b: Int = 2
c: Int = 3

In order to support rich user-defined interfaces comparable to language-level features in other languages, the Scala language itself has features like infix notation that are surprising to the newcomer. This is all Scala 101, but on occasion I’m still impressed by the possibilities that basic design decision gives to the programmer.

Let’s try our list example again, with grouping parentheses.

scala> val (a :: (b :: (c :: Nil))) = (1 :: (2 :: (3 :: Nil)))
a: Int = 1
b: Int = 2
c: Int = 3

Because of the right-associativity of methods ending with a colon, the nesting is reversed, but you can probably see that a solution to flattening our nested tuples is getting closer.

The standard library provides a :: case class as a helper to the :: method of List, and like all case classes it has a companion extractor object. The above relies on infix notation for the extractor; the constructor style makes it a little more plain.

scala> val ::(a, ::(b, ::(c, Nil))) = (1 :: (2 :: (3 :: Nil)))
a: Int = 1
b: Int = 2
c: Int = 3

That’s just how you expect case class extraction to work. On the right-hand side, :: is a method call on a list but its definition constructs the same case class. See for yourself. So actually, it’s more like the standard library provides the method as a helper to the case class.

That’s great for lists and we’d like to do the same for a pair, but we can’t use the same case class technique since Tuple2 is itself a case class. Not to worry, even though case classes are a language feature, the extractor functionality they use is available to any object. (It was a short holiday for this golden hammer.) We’ll call ours & since we want it to partner with the & method of Directive.

scala> object & {
     |   def unapply[A,B](tup: Tuple2[A,B]) = Some(tup)
     | }
defined module $amp

Let’s try it out with modest constructor notation first.

scala> val &(a,b) = (1,2)
a: Int = 1
b: Int = 2

Infix?

scala> val a & b = (1,2)
a: Int = 1
b: Int = 2

Nesting???

scala> val a & b & c = ((1,2),3)
a: Int = 1
b: Int = 2
c: Int = 3

Sweet!!!

Nested

Now we can assign arbitrarily many nested success values from combined directives in a simple flat statement.

a & b & c <- (data.as.Required[String] named "a") &
  (data.as.Required[Int] named "b") &
  (data.as.Required[BigInt] named "c")

And that is basically how it works. One caveat is that Unfiltered already had a & extractor object for pattern matching on requests, but I was able to overload the unapply method without issue — once again, I’m rescued by the soundness of Scala’s core features.
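Overloading unapply works like overloading any other method: the compiler picks the overload that accepts the scrutinee’s type. A minimal sketch (my own toy overloads, not Unfiltered’s actual request extractor):

```scala
// One object, two unapply overloads, chosen by the scrutinee's type
// (a sketch; not Unfiltered's actual code).
object & {
  // splits a pair, as used for joined directive results
  def unapply[A, B](tup: (A, B)): Some[(A, B)] = Some(tup)
  // a second overload for a different type can coexist peacefully:
  // split a string into its first word and the rest
  def unapply(s: String): Some[(String, String)] =
    Some(s.span(_ != ' ') match { case (h, t) => (h, t.trim) })
}

val x & y = (1, 2)                       // tuple overload
val first & rest = "hello there world"   // string overload
```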

You might wonder why I didn’t name it ->. And indeed, I could have.

scala> object -> {
     | def unapply[A,B](tup: Tuple2[A,B]) = Some(tup)
     | }
defined module $minus$greater

scala> val a -> b -> c = 1 -> 2.0 -> "3"
a: Int = 1
b: Double = 2.0
c: String = 3

Isn’t that pretty? I don’t know why this isn’t defined in the standard library already, perhaps it’s been discussed before. I think it would be useful, and I would have used that here instead of defining my own extractor. It would promote the pattern of nesting tuples, rather than defining 22 methods while taking a shot of vodka after each one.

(I am also aware, vaguely, that I have wandered onto ground inhabited by HLists. Don’t shoot! I come in peace.)

In any case, an object -> is not in the standard library. I don’t want to invite the possibility of a collision, should it be added later. And also, if nested tuples and -> extractors were a common pattern, it would be one thing. Since they are not yet, I don’t want to have to explain to people why & is used to join directives while a -> extractor is used to split them.

All told, I’m very pleased with the resulting API for parameter directives, and hope people will get good use out of it. Directives are now fully documented and Unfiltered parameter validation is finally done.

“Serge was acquitted via the 2nd Circuit Court of Appeals, and released in February of 2012. (photo above) He has since been re-arrested and is being tried by the state of New York. In the United States we have a thing called double jeopardy — you can’t be tried for the same thing twice. Somehow that doesn’t apply here. Not when Goldman is after you. Sergey Aleynikov faces two felony counts in New York.”

Goldman Sachs sent a brilliant computer scientist to jail over 8MB of open source code uploaded to an SVN repo - garry’s posthaven
