## On .name and third-level domains

And, we’re back! After being off-line for several weeks, this site is now live again! I can’t imagine you missed it.

Here’s what happened. Let’s start at the beginning. In 2003, ICANN added .name to the list of top-level domains (like .com, .edu, etc.). The idea was that individuals would use it for personal sites and email addresses. You can still do this, but (in case you haven’t noticed) it’s not very popular, and most domain name registrars don’t even sell .name addresses.

I purchased harlan.harris.name in 2003. Unlike with .com, you don’t generally buy second-level domains in .name; you buy third-level domains. (.name is the top-level domain, harris.name is the second-level domain, which you can’t buy, and harlan.harris.name is the third-level domain.) A cool feature is that if you buy a.b.name, you can get the email address a@b.name, not something like me@a.b.name (although you can set that up too). So my email address has been harlan@harris.name for ten years.

Fast forward to April, 2013. I noticed that my personal web site (where you are now) had been replaced by a generic sales screen. You know, with a bunch of random keywords, a stock photo, and “buy this domain!” in big red print. Not good. At first I thought that my WordPress site (which hosts this blog) had gotten hacked, but no such luck. It turned out to be a convoluted mess of broken technology and confused customer support reps. The fortunate thing is that I don’t use this site extensively, and the problem with the web forwarding didn’t seem to affect my email address forwarding, so I didn’t lose any email.

The simplified version of what happened is that the company I bought the domain from in 2003, PersonalNames, merged with a company called Dotster a year or two ago. They presumably merged their technical systems together, which makes sense. But for some reason they failed to properly set up a system for administering third-level .name domains. And so my account never got properly transferred into their systems, and they stopped sending me notices about problems.

Although I still technically owned harlan.harris.name, I could no longer log in and administer it, and the redirection to this web site (at another company, HostGator) was reset at some point for still-unknown reasons.

It took a week and a dozen email messages and several hours on the phone for Dotster to figure out that yes, they owned this domain, but no, they didn’t have the technical chops to administer it.

I then set up an account with another company, eNom (nom, nom…), that does support third-level .name domains. Transferring the domain took another week and three attempts, due to errors on both sides. Add 48 hours for the DNS changes to propagate around the Internet, and I was finally back online yesterday!

Except that although my email forwarding still works, I don’t yet have control over that, because Dotster seemingly neglected to transfer email forwarding rights at the same time as the rest of the domain. So if you need me tomorrow, I’ll be back on the phone with tech support.

Categories: Personal

For those people (or, more likely, 0 or 1 persons) who follow this blog to catch up on my professional thoughts: I’ve been doing a little bit of writing on the Data Community DC blog. Here are all my posts over there: http://datacommunitydc.org/blog/author/harlan/. I’d definitely encourage you to read everyone else’s work on the DC2 blog too!

Two titles of my own:

And three of others’:

There are also weekly round-up posts on data topics generally, and on data visualization specifically, as well as event previews and reviews, etc.

Categories: Professional

## Pretzel Whoopie Pies with Vanilla Stout Filling

My newish cooking club had a dinner yesterday with the theme American Beer. I was tasked with dessert, and came up with this recipe for Pretzel Whoopie Pies. They turned out extremely well, so I thought I’d share the recipe here.

Sources:

Ingredients:

• 2 egg yolks
• 1/2 c minus 1 T sugar
• 1 T light corn syrup
• 1/2 c finely ground unsalted mini pretzels
• 1/2 c cake flour
• 1 t baking powder
• 1/8 t salt
• 4 T butter, softened
• 1/3 c milk
• kosher salt
• 1 c stout beer (Breckenridge Vanilla Porter is excellent)
• 4 T butter, softened
• 8 oz powdered sugar

Recipe:

1. Beat egg yolks, sugar, and corn syrup until lightened.
2. Mix pretzel flour, cake flour, baking powder, and salt. Beat in butter and milk. Add to liquid mixture and mix thoroughly.
3. Refrigerate dough for 30 minutes to hydrate evenly. Preheat oven to 350 F.
4. Drop 12 evenly-shaped cookies onto a Silpat-covered pan. A ring mold is helpful. Sprinkle kosher salt lightly over the tops.
5. Bake about 14 minutes, until starting to brown around the edges, but still soft.
6. Cool thoroughly on wire racks.
7. Boil beer in a saucepan large enough to deal with foaming up, and reduce to 1/2 c. Cool to room temperature.
8. Mix butter, powdered sugar, and reduced beer into a creamy, delicious frosting.
9. Make sandwiches out of cookies and frosting.

Makes 6 whoopie pies.

Categories: Personal

## Communication and the Data Scientist

I recently gave a presentation on communication issues around the terms “Data Science” and “Data Scientist”, based in part on a survey that I did with my Meetup colleagues Marck and Sean. The basic idea is that these new, extremely broad buzzwords have caused confusion, which has made it harder for people with skills and people with data to find each other and communicate effectively about who does what and what appropriate expectations should be. The survey was an attempt to bring some clarity to the question of who the people in this newly-reformulated community are, and how they view themselves and their skills. For more on the survey, see our post on the Data Community DC blog. Here’s the video of my presentation at DataGotham:

Categories: Personal

## integrating R with other systems

I just returned from the useR! 2012 conference for developers and users of R. A common theme across many of the presentations was the integration of R-based statistical systems with other systems, be they other programming languages, web systems, or enterprise data systems. Some highlights for me were an update to Rserve that includes one-stop web services, and a presentation on ESB integration. Although I didn’t see it discussed, the new httr package for easier access to web services is another outstanding development in integrating R into large-scale systems.
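
As a tiny illustration of why that’s exciting, here’s the kind of thing httr makes nearly trivial: call a JSON web service and work with the response as an R list. (The endpoint here is made up; substitute a real service.)

```r
library(httr)

# hypothetical endpoint, for illustration only
resp <- GET("http://example.com/api/v1/report", query = list(region = "DC"))
stop_for_status(resp)                    # fail loudly on HTTP errors
result <- content(resp, as = "parsed")   # parse the JSON body into an R list
str(result)
```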

Coincidentally, just a week or so earlier I had given a short presentation to the local R Meetup entitled “Annotating Enterprise Data from an R Server.” The topic for the evening was “R in the Enterprise,” and others talked about generating large, automated reports with knitr, and using RPy2 to integrate R into a Python-based web system. I talked about my experiences building and deploying a predictive system, using the corporate database as the common link. Here are the slides:
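
The slides have the details, but the basic pattern is easy to sketch: read rows from a shared database table, score them in R, and write the annotations back, so that the database is the only interface between R and the rest of the enterprise system. Here’s a toy version of that round trip (not the production code; RSQLite stands in for the corporate database, and the “model” is just a placeholder):

```r
library(DBI)
library(RSQLite)

con <- dbConnect(SQLite(), ":memory:")

# pretend this table is maintained by the enterprise system
dbWriteTable(con, "customers",
             data.frame(id = 1:5, spend = c(10, 50, 30, 80, 20)))

# pull the rows to score, apply a (placeholder) model, and write annotations back
todo <- dbGetQuery(con, "SELECT id, spend FROM customers")
todo$score <- predict(lm(spend ~ id, data = todo))   # stand-in for a real predictive model
dbWriteTable(con, "customer_scores", todo[, c("id", "score")], overwrite = TRUE)

dbDisconnect(con)
```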

Categories: Professional

## Survey of Data Science / Analytics / Big Data / Applied Stats / Machine Learning etc. Practitioners

As I’ve discussed here before, there is a debate raging (ok, maybe not raging) about terms such as “data science”, “analytics”, “data mining”, and “big data”. What do they mean, how do they overlap, and perhaps most importantly, who are the people who work in these fields?

Along with two other DC-area Data Scientists, Marck Vaisman and Sean Murphy, I’ve put together a survey to explore some of these issues. Help us quantitatively understand the space of data-related skills and careers by participating!

It should take 10 minutes or less, data will be kept confidential, and we look forward to sharing our results and insights in a variety of venues, including this blog! Thanks!

Categories: Professional

## Apple TV and cross-device user-interface integration

On last week’s Build and Analyze — a great podcast nominally about iOS development, but actually more about just living a tech-geek lifestyle — Marco talked a lot about the rumored “Apple TV” and whether it could actually be a groundbreaking product. He concluded that it probably couldn’t. Most people wouldn’t dump a working TV just for the Apple brand; the touch-screen interface that Apple has been using for the iPad and iPhone wouldn’t work for a TV; the only apps that would work well on a TV would be just ways of getting better content (I note that Roku apps are laughable, with the exception of Angry Birds); and getting access to better content than its competitors is probably impossible, even for Apple.

For these reasons and more, Marco suggested that there’s little that Apple, or anyone else, could do to substantially improve the TV experience, with the exception of better menu design.

I think there’s a way that Apple (or someone) could integrate modern technology into a TV that would be actually compelling, though. And in some ways it’s the same approach I blogged about earlier for MP3 players: cross-device user interfaces. Here’s how it might work for a television:

Categories: Personal

## Data Science, Moore’s Law, and Moneyball

I’m fond of navel gazing, meta discussions, and so forth. I’ve recently written about inferring navel gazing from link data, and about the meaning of the “Analytics” buzzword. This post will be my second on that other infectious buzzword, “Data Science”.

When I moved to Washington DC in July, I was struck by the fact that there was no Meetup for analytics/applied statistics/machine learning/data science. There’s a great DC Tech Meetup, a great Big Data Meetup, and a great R Meetup, but nothing like the NYC Predictive Analytics Meetup. So I and a couple of others I talked to about this (Marck Vaisman, who I first met through the NYC R Meetup a couple of years ago, and Matt Bryan, who I met just after moving to town) started a new Meetup, which we decided to call “Data Science DC”.

For our second meetup, we thought we should address some aspect of our name, and so I presented a little bit about the term and the controversies around its definition and its recent dramatic upsurge in popularity. Here are the slides (note that you should be able to click through the links on the slide to the source documents):

I mostly didn’t present a personal opinion about what I thought the term means, or what it should mean, but instead wanted to present a bunch of other people’s points of view to kick off an interesting discussion. And in that sense I succeeded. We had an exceedingly interesting conversation following my slides, and I think a couple of the most interesting ideas from the evening came out of that discussion.

Here are three theses I’d like to propose.

1. “Data Science” is defined as what “Data Scientists” do. What Data Scientists do has been very well covered, and it runs the gamut from data collection and munging, through application of statistics and machine learning and related techniques, to interpretation, communication, and visualization of the results. Who Data Scientists are may be the more fundamental question.
2. One reason Data Science is a big thing now is that advances in technology have made it easy for Data Scientists to develop wide-ranging expertise. Even 10 years ago, someone who could integrate several databases, run a multilevel regression, and generate elegant visualizations would have been seen as incredibly rare.
3. The other reason Data Science is a big thing now is that sabermetrics demonstrated that number-crunching brings results. There’s nothing business leaders love more than a sports analogy, and the analytic revolution in professional sports immediately drew attention to the ways that numbers beat intuition.

I tend to like the idea that Data Science is defined by its practitioners, that it’s a career path rather than a category of activities. In my conversations with people, it seems that people who consider themselves Data Scientists typically have eclectic career paths that might in some ways seem not to make much sense. A typical path might be someone who started out learning to program, then spent some time in a scientific field, then hopped around a variety of different roles, collecting a wide variety of different skills, all of which related to using analytical techniques to make sense of data.

This sort of career path isn’t particularly new, but what is new is that it’s now possible to get started in all of the processes involved in Data Science relatively quickly and cheaply. (Thanks to Taylor Horton for suggesting this at the Meetup!) Fast computers, open source tools, and some programming skills allow someone to try a new data management approach or a new machine learning technology incredibly quickly, and to iterate on approaches until a solution to a particular problem is found. This has two consequences. First of all, the productivity of a modern Data Scientist is remarkable. Projects that a few decades ago would have taken teams of people literally years can now be done in a few days. Second of all, this amazing productivity allows people to spend their 10,000 hours developing expertise in the now vertically integrated process of Data Science, rather than having to spend all of that time focusing on developing skills on just a single aspect of the task. There are a huge number of things that need to be learned to be an effective Data Scientist, but it is now possible to learn those skills quickly enough to make a career out of being a Jack of all Trades and a near-master at many of them.

So now there’s a supply of people who could be Data Scientists. But what about the type of demand that drives an incessant stream of O’Reilly articles and job postings? Where does the demand come from? Justin Grimes had an intriguing idea that resonated with me: analytics in sports, which I propose as the other reason why analytics and data science have become buzzwords. Although business has used mathematical methods for 100 years (thousands if you include finance and insurance), the idea that you could hire a very small number of people to analyze data and beat gut instincts in many aspects of decision making is much newer. The idea that a statistician could turn around the Oakland A’s by radically overturning longstanding recruiting practice was a powerful analogy. Even now, business books about analytics almost always have sports examples in the first chapters. I made a point at work the other day by noting that most professional sports prognosticators predict NFL playoff outcomes wrong because they over-weight last year’s results. Sports analogies get attention.

Does this make sense? Data Science is a buzzword now because a group of people with eclectic talents matches a growing demand for, and recognition of the value of, those talents. I’d love feedback on these thoughts!

Categories: Professional

July 30th, 2011

This past Friday, the web portal to the US Federal government, USA.gov, organized hackathons across the US for programmers and data scientists to work with and analyze the data from their link-shortening service. It turns out that if you shorten a link to a US government web page with bit.ly, the shortened link looks like 1.usa.gov/V6NpL (that one goes to a NASA page). And because this service was paid for by taxpayer money, the data about each clickthrough is freely available.

Shortened-link click-through data is interesting. It tells you the time and approximate geographic location of each click-through, and the web page or service that the link was on (assuming someone didn’t type the URL in by hand). You also know when the shortened link was created, which tells you a little bit about the way links are shared. Bit.ly themselves have several full-time data scientists on staff whose job is to learn about what shortened-link data can say about web traffic patterns and link sharing, potentially very lucrative information.

For my part, I just wanted to do some fun visualizations. Along with friends in NYC, I joined the hackathon remotely, following along on twitter and listening to dance music in their turntable.fm room. I managed to get rough drafts of two somewhat non-trivial graphs done during the official hackathon, and I re-built them with larger and more random data later.

This first graph looks at the difference in time between when a link was created (the first time someone tried to shorten the target URL) and when the clickthroughs happened. For each of the 25 most frequently visited target domains (mostly US government agencies), I built a density plot, or smoothed histogram, of the timings.
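
The plotting itself is straightforward. Here’s a rough sketch of the approach (not the exact hackathon code, which is in the GitHub repo linked below); it assumes a data frame clicks with the target domain and the lag in hours between link creation and each clickthrough:

```r
library(ggplot2)

# keep only the 25 most frequently visited target domains
top25 <- names(sort(table(clicks$domain), decreasing = TRUE))[1:25]
top_clicks <- subset(clicks, domain %in% top25)

# one smoothed histogram (density plot) of creation-to-click lag per domain
ggplot(top_clicks, aes(x = hours_since_creation)) +
  geom_density() +
  scale_x_log10() +
  facet_wrap(~ domain, scales = "free_y") +
  labs(x = "Hours between link creation and clickthrough", y = "Density")
```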

There are some interesting differences. Links from senate.gov are mostly clicked through within a few hours of their creation, and links from the NY Courts are clicked through in less than an hour. There appear to be links to NOAA and the State of California pages that are frequently clicked through hundreds of days after their creation. It would be interesting to dive into the content of the target pages, categorize them, and learn what causes these differences.

Speaking of diving into the content, I did a very simple version of that next. When clicking a link to a government web page, are people looking for information about their hometown? Fortunately, clickthrough data includes geocode information for the clicker’s IP address, which includes the nearest city. I decided to find out by scraping the text content of the 100 most frequently accessed web pages and detecting whether or not each city’s name appeared in each page.
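
That step is simpler than it sounds. A rough sketch (simplified from the real thing; it assumes top_urls, a character vector of the 100 most-clicked pages, and cities, the clicker cities from the geocoded data):

```r
# fetch each page's text; pages that fail to load just become empty strings
page_text <- vapply(top_urls, function(u) {
  paste(tryCatch(readLines(u, warn = FALSE), error = function(e) ""), collapse = " ")
}, character(1))

# for each city, flag which pages mention it by name (pages x cities logical matrix)
mentions <- sapply(cities, function(city) grepl(city, page_text, fixed = TRUE))
```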

This “navel-gazers” plot shows the summarized results. For each city in the data set with more than 5 clickthroughs, I plotted the raw number of clickthroughs from that city (the X axis) against the proportion of clickthroughs that ended up on a web page with the name of the city in it (the Y axis). Many cities are clustered in the lower-left, with few clicks and no instances of their city on the target page. Large cities like New York and London are far to the right, as expected from their population, and they show up in target web pages occasionally. Washington (DC) is both a frequent clicker of shortened links and a city that tends to show up on web pages, unsurprising given that it is the seat of the Federal government. The exceptions are the most interesting. People in Bangalore clicked through more than 15 times in this sample, and about 12% of their clicks were to pages with the name of their city. In Boulder, a quarter of the 12 or so clicks went to pages mentioning their town!
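
The plot itself is just a labeled scatter; roughly something like this, assuming a summarized data frame (call it city_summary) with one row per city:

```r
library(ggplot2)

# city_summary: clicks = number of clickthroughs from the city,
# prop_mentioned = proportion of those clicks landing on a page naming the city
ggplot(city_summary, aes(x = clicks, y = prop_mentioned, label = city)) +
  geom_text(size = 3) +
  scale_x_log10() +
  labs(x = "Number of clickthroughs from city",
       y = "Proportion of clicks to pages naming the city")
```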

Deeper analysis would be needed to explain these results, but they were fun to put together! For those interested in checking out my work, including R code to pull a sample of 1.usa.gov data from the archives, please check out my repository on GitHub: https://github.com/HarlanH/hackathon-1usagov

Categories: Professional

## making meat shares more efficient with R and Symphony

In my previous post, I motivated a web application that would allow small-scale sustainable meat producers to sell directly to consumers using a meat share approach, using constrained optimization techniques to maximize utility for everyone involved. In this post, I’ll walk through some R code that I wrote to demonstrate the technique on a small scale.

Although the problem is set up in R, the actual mathematical optimization is done by Symphony, an open-source mixed-integer solver that’s part of the COIN-OR project. (The problem of optimizing assignments, in this case of cuts of meat to people, is an integer programming problem, because the solution involves assigning either 0 or 1 of each cut to each person. More generally, linear programming and related optimization frameworks allow solving for real-valued variables.) The RSymphony package allows problems set up in R to be solved by the C/C++ Symphony code with little hassle.

My code is in a public github repository called groupmeat-demo, and the demo code discussed here is in the subset_test.R file. (The other stuff in the repo is an unfinished version of a larger-scale demo with slightly more realistic data.)

For this toy problem, we want to optimally assign 6 items to 3 people, each of whom has a different utility (value) for each item. In this case, I’m ignoring any fixed utility, such as cost in dollars, but that could be added into the formulation. Additionally, assume that items #1 and #2 cannot both be assigned, as with pork loin and pork chops.

This sort of problem is fairly simple to define mathematically. To set up the problem in code, I’ll need to create some matrices that are used in the computation. Briefly, the goal is to maximize an objective expression, $\mathbf{c}^T\mathbf{x}$, where the $\mathbf{x}$ are variables that will be 0 or 1, indicating an assignment or non-assignment, and the $\mathbf{c}$ is a coefficient vector representing the utilities of assigning each item to each person. Here, there are 6 items for 3 people, so I’ll have a 6×3 matrix, flattened to an 18-vector. The goal will be to find 0s and 1s for $\mathbf{x}$ that maximize the whole expression.

Here’s what the $\mathbf{c}$ matrix looks like:

```
      pers1 pers2  pers3
item1 0.467 0.221 0.2151
item2 0.030 0.252 0.4979
item3 0.019 0.033 0.0304
item4 0.043 0.348 0.0158
item5 0.414 0.050 0.0096
item6 0.029 0.095 0.2311
```

It appears as if everyone likes item1, but only person1 likes item5.
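
For readers who want to run the snippets below without pulling the repo, here’s a minimal sketch of the setup they assume. The names (num.items, num.pers, obj.utility, mat.utility.0, obj, and mat.0) match the ones used below, but the utilities here are just random numbers, not the exact values shown above:

```r
# Minimal setup sketch (not the code from the groupmeat-demo repo).
library(plyr)       # laply() and maply() are used below
library(Rsymphony)  # R interface to the SYMPHONY solver

num.items <- 6
num.pers  <- 3

set.seed(1)
obj.utility <- matrix(runif(num.items * num.pers),
                      nrow = num.items, ncol = num.pers,
                      dimnames = list(paste0("item", 1:num.items),
                                      paste0("pers", 1:num.pers)))
obj   <- as.double(obj.utility)  # the flattened 18-vector of utilities, c
mat.0 <- matrix(0, nrow = num.items, ncol = num.pers)  # zero template for constraint rows
mat.utility.0 <- mat.0           # same zero template, used for the parity constraints later
```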

Additionally, I need to define some constraints. For starters, it makes no sense to assign an item to more than one person. So, for each row of that matrix, the sum of the variables (not the utilities) must be 1, or maybe 0 (if that item is not assigned). I’ll create a constraint matrix, where each row contains 18 columns, and the pattern of 0s and 1s defines a row of the assignment matrix. Since there are 6 items, there are 6 rows (for now). Each row needs to be less than or equal to one (I’ll tell the solver to use integers only later), so I also define vectors of inequality symbols and right-hand-sides.

```r
# for each item/row, enforce that the sum of indicators for its assignment is <= 1
mat <- laply(1:num.items, function(ii) { x <- mat.0; x[ii, ] <- 1; as.double(x) })
dir <- rep('<=', num.items)
rhs <- rep(1, num.items)
```

To add the loin/chops constraint, I need to add another row, specifying that the sum of the indicators for both rows now must be 1 or less as well.

```r
# for rows 1 and 2, enforce that the sum of indicators for their assignments is <= 1
mat <- rbind(mat, matrix(matrix(c(1, 1, rep(0, num.items-2)),
                                nrow=num.items, ncol=num.pers), nrow=1))
dir <- c(dir, '<=')
rhs <- c(rhs, 1)
```

Here’s what those matrices and vectors look like:

```
> mat
     1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
[1,] 1 0 0 0 0 0 1 0 0  0  0  0  1  0  0  0  0  0
[2,] 0 1 0 0 0 0 0 1 0  0  0  0  0  1  0  0  0  0
[3,] 0 0 1 0 0 0 0 0 1  0  0  0  0  0  1  0  0  0
[4,] 0 0 0 1 0 0 0 0 0  1  0  0  0  0  0  1  0  0
[5,] 0 0 0 0 1 0 0 0 0  0  1  0  0  0  0  0  1  0
[6,] 0 0 0 0 0 1 0 0 0  0  0  1  0  0  0  0  0  1
[7,] 1 1 0 0 0 0 1 1 0  0  0  0  1  1  0  0  0  0
> dir
[1] "<=" "<=" "<=" "<=" "<=" "<=" "<="
> rhs
[1] 1 1 1 1 1 1 1
```


Finally, specify that the variables must be binary (0 or 1), and call SYMPHONY to solve the problem:

```r
# this is an IP problem, for now
types <- rep('B', num.items * num.pers)
max <- TRUE  # maximizing utility

soln <- Rsymphony_solve_LP(obj, mat, dir, rhs, types=types, max=max)
```

And, with a bit of post-processing to recover matrices from vectors, here’s the result:

```
$solution
 [1] 0 0 0 0 1 0 0 0 1 1 0 0 0 1 0 0 0 1

$objval
[1] 1.52

$status
TM_OPTIMAL_SOLUTION_FOUND
                        0

Person #1 got Items 5 worth 0.41
Person #2 got Items 3, 4 worth 0.38
Person #3 got Items 2, 6 worth 0.73
```

So that’s great. It found an optimal solution worth more than 50% more than the expected value of a random assignment. But there’s a problem. There’s no guarantee that everyone gets anything, and in this case, person #3 gets almost twice as much utility as person #2. Unfair! We need to enforce an additional constraint: the difference between the maximum utility that any one person gets and the minimum utility that any one person gets must not be too high. This is sometimes called a parity constraint.

Adding parity constraints is a little tricky, but the basic idea here is to add two more variables to the 18 I’ve already defined. These variables are positive real numbers, and they are forced by constraints to be the maximum and minimum total utilities per person. In the objective function, they are then weighted so that their difference is not too big. So that expression becomes $\mathbf{c}^T\mathbf{x} - \lambda x_{19} + \lambda x_{20}$. The first new variable (the maximum utility of any person) is minimized, while the second (the minimum) is maximized. The $\lambda$ free parameter defines how much to trade off parity against total utility, and I’ll set it to 1 for now.

For the existing rows of the constraint matrix, these new variables get 0s. But two more rows need to be added per person, to force their values to be no bigger/smaller than (and thus the same as) the maximum/minimum of any person’s assigned utility.

```r
# now for those upper and lower variables
# \forall p, \sum_i u_i x_{i,p} - d.upper \le 0
# \forall p, \sum_i u_i x_{i,p} - d.lower \ge 0
# so, two more rows per person
d.constraint <- function(iperson, ul) {  # ul = 1 for upper, 0 for lower
  x <- mat.utility.0
  x[, iperson] <- 1
  x <- x * obj.utility
  c(as.double(x), (if (ul) c(-1, 0) else c(0, -1)))
}
mat <- rbind(mat, maply(expand.grid(iperson=1:num.pers, ul=c(1,0)),
                        d.constraint, .expand=FALSE))
dir <- c(dir, c(rep('<=', num.pers), rep('>=', num.pers)))
rhs <- c(rhs, rep(0, num.pers*2))
```

The constraint inequalities then become as follows:

```
> print(mat, digits=2)
    1     2     3     4    5     6    7    8     9   10   11    12   13  14    15    16     17   18 19 20
 1.00 0.000 0.000 0.000 0.00 0.000 1.00 0.00 0.000 0.00 0.00 0.000 1.00 0.0 0.000 0.000 0.0000 0.00  0  0
 0.00 1.000 0.000 0.000 0.00 0.000 0.00 1.00 0.000 0.00 0.00 0.000 0.00 1.0 0.000 0.000 0.0000 0.00  0  0
 0.00 0.000 1.000 0.000 0.00 0.000 0.00 0.00 1.000 0.00 0.00 0.000 0.00 0.0 1.000 0.000 0.0000 0.00  0  0
 0.00 0.000 0.000 1.000 0.00 0.000 0.00 0.00 0.000 1.00 0.00 0.000 0.00 0.0 0.000 1.000 0.0000 0.00  0  0
 0.00 0.000 0.000 0.000 1.00 0.000 0.00 0.00 0.000 0.00 1.00 0.000 0.00 0.0 0.000 0.000 1.0000 0.00  0  0
 0.00 0.000 0.000 0.000 0.00 1.000 0.00 0.00 0.000 0.00 0.00 1.000 0.00 0.0 0.000 0.000 0.0000 1.00  0  0
 1.00 1.000 0.000 0.000 0.00 0.000 1.00 1.00 0.000 0.00 0.00 0.000 1.00 1.0 0.000 0.000 0.0000 0.00  0  0
 0.47 0.030 0.019 0.043 0.41 0.029 0.00 0.00 0.000 0.00 0.00 0.000 0.00 0.0 0.000 0.000 0.0000 0.00 -1  0
 0.00 0.000 0.000 0.000 0.00 0.000 0.22 0.25 0.033 0.35 0.05 0.095 0.00 0.0 0.000 0.000 0.0000 0.00 -1  0
 0.00 0.000 0.000 0.000 0.00 0.000 0.00 0.00 0.000 0.00 0.00 0.000 0.22 0.5 0.030 0.016 0.0096 0.23 -1  0
 0.47 0.030 0.019 0.043 0.41 0.029 0.00 0.00 0.000 0.00 0.00 0.000 0.00 0.0 0.000 0.000 0.0000 0.00  0 -1
 0.00 0.000 0.000 0.000 0.00 0.000 0.22 0.25 0.033 0.35 0.05 0.095 0.00 0.0 0.000 0.000 0.0000 0.00  0 -1
 0.00 0.000 0.000 0.000 0.00 0.000 0.00 0.00 0.000 0.00 0.00 0.000 0.22 0.5 0.030 0.016 0.0096 0.23  0 -1
> dir
 [1] "<=" "<=" "<=" "<=" "<=" "<=" "<=" "<=" "<=" "<=" ">=" ">=" ">="
> rhs
 [1] 1 1 1 1 1 1 1 0 0 0 0 0 0
```

Looking at just the last row, this constraint says that the sum of the utilities of any items assigned to person #3, minus the lower limit, must be at least 0. That is essentially the definition of the lower limit, given that the constraint holds true for all three people in this problem. Similar logic applies for the upper limit. Running the solver with this set of inputs gives the following:

```
$solution
 [1] 0.000 0.000 1.000 0.000 1.000 0.000 0.000 0.000 0.000 1.000 0.000 1.000 0.000 1.000 0.000 0.000 0.000
[18] 0.000 0.498 0.433

$objval
[1] 1.31

$status
TM_OPTIMAL_SOLUTION_FOUND
                        0

Person #1 got Items 3, 5 worth 0.43
Person #2 got Items 4, 6 worth 0.44
Person #3 got Items 2 worth 0.50
```

The last two numbers in the solution are the values of the upper and lower bounds. Note that the objective value is only 41% higher than a random assignment, but the utilities assigned to each person are much closer. Dropping the $\lambda$ value to something closer to 0 causes the weights of the parity bounds to be less important, and the solution tends to be closer to the initial result.
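
One piece of bookkeeping the snippets above gloss over: the objective vector and the variable types also need entries for the two new bound variables. My reconstruction (not code from the post’s repo) looks something like this:

```r
lambda <- 1                        # how strongly to trade total utility for parity
obj2   <- c(obj, -lambda, lambda)  # maximize c'x - lambda*upper + lambda*lower
types2 <- c(types, 'C', 'C')       # the two bound variables are continuous, not binary

soln <- Rsymphony_solve_LP(obj2, mat, dir, rhs, types = types2, max = TRUE)
```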

Scaling this up to include constraints in pricing, farm preferences, price vs. preference meta-preferences, etc., is not conceptually difficult, but would just entail careful programming. It is left as an exercise for the well-motivated reader!

If you’ve made it this far, I’d definitely appreciate any feedback about this idea, corrections to my formulation or code or terminology, etc!

(Thanks to Paul Ruben and others on OR-Exchange, who helped me figure out how to think about the parity problem, and to the authors of WP-codebox and WP LaTeX for giving me tools to put nice scrollable R code and math in this post!)

Categories: Professional