28 August 2007

A Game-theoretic Framework for Creating Optimal SLA/Contract

Intriguing paper. Game theory is certainly a key tool in negotiations, conflict resolution, optimal cooperation problems, etc. Very interesting to see it applied in a practical manner to IT-related contracts.

Abstract: An SLA/Contract is an agreement between a client and a service provider. It specifies desired levels of service and penalties in case of default. It is of interest, from the Service Provider's point of view, to determine the optimal contract that will maximize its utility. In this work we model the situation based on the notion of Moral Hazard: providing a good service is costly and results are affected by the resources involved. As a consequence, a credible contract must fulfill the incentive compatibility constraint. We extend the above model to take into account the possibility that there might be different types of clients, and that the Service Provider will offer a menu of contracts intended for each of these clients, as a means of maximizing utility. From the Service Provider's point of view, finding an optimal contract will consist of solving a nonlinear optimization problem subject to constraints. We derive conditions under which these constraints will take a simple form and we analyze a scenario in which the randomness comes from the Response Time of a given IT Service, and the input is the number of servers that will be dedicated to each client.
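
To make the flavor of the problem concrete, here is a toy sketch of my own (emphatically not the paper's model, and it ignores the menu-of-contracts and incentive-compatibility machinery): if the only input is the number of servers dedicated to a client, and missing the response-time target triggers a penalty, then picking the best server count is a small nonlinear optimization. Every parameter value below is made up.

    # Toy provider-side sketch (mine, not the paper's model): choose the number of
    # servers n to maximize expected utility = price - capacity cost - expected SLA
    # penalty, assuming the chance of missing the response-time target tau decays
    # exponentially in n. All parameter values are hypothetical.
    import math
    from scipy.optimize import minimize_scalar

    p, q, c = 100.0, 250.0, 2.0    # contract price, penalty per breach, per-server cost
    mu, tau = 0.05, 2.0            # per-server service rate, SLA response-time target

    def expected_utility(n):
        breach_prob = math.exp(-mu * n * tau)   # assumed P(response time > tau)
        return p - c * n - q * breach_prob

    # Treat n as continuous for simplicity and maximize over a bounded range.
    res = minimize_scalar(lambda n: -expected_utility(n), bounds=(1, 50), method="bounded")
    print(f"optimal servers ~ {res.x:.1f}, expected utility ~ {expected_utility(res.x):.2f}")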

25 August 2007

Reading List #6

I'm winding down from vacation, so here's another Reading List, comprising three selections obtained from Sherman's Books & Stationery in Freeport, Maine:

Enjoy!

Reading List #5

During a lovely week relaxing in York Beach, Maine, the following books were either started or finished:

Enjoy!

16 August 2007

'Sunk costs' and the war

University of San Francisco economics professor Bruce Wydick has an exceptionally clear op-ed piece in the 15 August 2007 issue of USA Today regarding America's involvement in the Iraq war.

14 August 2007

It's Wikilicious!

The hype du jour is that there now exists a clever analysis tool for Wikipedia entries, one that exposes the interesting/entertaining sources that edit, massacre, or delete entries pertaining to themselves.

* (I don't recall where I read this first today, but to keep things on the up-and-up, it was either Slashdot or Wired through my feed reader.)

It seems that one can data-mine *.wikipedia.org via online search or by downloading a snapshot image with all previous edits, à la a useful audit trail. OK, that doesn't seem too interesting. Until... one looks at the sources of the edits and the effects of those edits.
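
For the curious, pulling a page's raw edit history is straightforward. Below is a rough sketch of my own (not the tool being hyped) that uses the public MediaWiki API to list recent revisions of a page and flag the anonymous ones, which are attributed to the editor's IP address - the address one would then trace back to a company or agency. The page title "Example" is just a placeholder.

    # Rough sketch: list a page's revisions via the MediaWiki API and flag
    # anonymous edits, which are attributed to an IP address (IPv4 only here).
    import re
    import requests

    API = "https://en.wikipedia.org/w/api.php"
    IPV4 = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

    def anonymous_edits(title, limit=100):
        params = {
            "action": "query", "prop": "revisions", "titles": title,
            "rvprop": "user|timestamp|comment", "rvlimit": limit, "format": "json",
        }
        pages = requests.get(API, params=params).json()["query"]["pages"]
        for page in pages.values():
            for rev in page.get("revisions", []):
                user = rev.get("user", "")
                if IPV4.match(user):            # registered users have names, not IPs
                    yield user, rev["timestamp"], rev.get("comment", "")

    for ip, when, comment in anonymous_edits("Example"):   # placeholder page title
        print(ip, when, comment)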

This is a perfect example, without drawing attention to myself, of the difference between data and information and the relation between the two. Data is available... it's up to individuals or processes to transform it into information, and hopefully to protect it along the way.

13 August 2007

The New Age of Ignorance

As a follow-up to my last post concerning the systemic and intentional attempts to dissuade students from studying mathematics, I thought I would just offer a pointer to this article from The Guardian, by way of the Edge:

The New Age of Ignorance

We take our young children to science museums, then as they get older we stop. In spite of threats like global warming and avian flu, most adults have very little understanding of how the world works.

Mathematics: Seriously, Just Say "No!"

In a previous post, I commented on a trend in the UK where students were being discouraged from taking "unnecessary" higher-level mathematics classes. By way of Slashdot, it seems that this is an issue in Australia as well. The poster states and asks:

"The claim is that Australian schools are actively discouraging students from taking upper level math courses to boost their academic results on school league tables. How widespread is this phenomenon? Are schools taking similar measures in the US and Canada?"


I ask again: Will science education in the classroom be dead by the time my kids go to school?

02 August 2007

Data v. Information

I had fun with a previous post regarding a data visualization tool called Many Eyes, from IBM alphaWorks Services. There are some nice graphing templates available, but pretty graphs simply do not the wonderful experience make. OpenOffice CALC and Microsoft Excel can produce a multitude of graphs in a variety of canned formats, but do they really help one understand the data being presented?

Are they capable, though, as tools, of transforming data into information? The distinction may or may not be subtle, but the implications are huge. We're generally over-run with data and consider so much of it to be throw-away. Information, however - information being data with some sort of context applied to it - one holds onto as long as possible, because the context applied to the data, the transform or function applied to some data set, increases the data's value and elevates it to information.

Consider a couple of simple examples:

What does this string of data mean, if anything: 011903124555555
  1. Well, it could be a random string of 15 digits and not very interesting (highly likely).
  2. Out-of-country phone dialing number? (yes, US Embassy in Turkey)
  3. Credit card number? (same format for Visa/MasterCard but not a valid number - see the sketch after this list)
  4. USPS/FedEx/UPS/DHL tracking number? (UPS if you drop their "1Z" prefix)
  5. US social security number? (Massachusetts SSN with some cruft appended to the end).
  6. Product SKU (I seem to recall that there are standardized SKU formats)
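
As for item 3 above, "not a valid number" is something one can actually test: Visa/MasterCard numbers must satisfy the Luhn checksum. Here's a quick sketch of that check - a small example of "context" being nothing more than a function applied to raw data:

    # Luhn checksum - the self-check that Visa/MasterCard numbers must satisfy.
    def luhn_valid(digits: str) -> bool:
        total = 0
        for i, ch in enumerate(reversed(digits)):
            d = int(ch)
            if i % 2 == 1:      # double every second digit, counting from the right
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    print(luhn_valid("011903124555555"))    # the string above -> False
    print(luhn_valid("4111111111111111"))   # a well-known test card number -> True
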
We just don't know, without any context applied to it. Now, what if we thought about another string of digits in the context of identity theft:

  • 034011234,Last,First,Acct#

Huh... that looks important and maybe should be protected. Maybe it's a person with an account # and an MA SSN on record. The problem, though, is that if the suspect data were changed to:

  • Acct#,034011234,Last,First

It could become meaningless: the transformation changed through a simple re-ordering of the data elements, and the context may no longer be identifiable, therefore leaving the data as just data. There's a good chance, however, in this specific case, that the context could be inferred. What happens if we eliminate the comma delimiters and just spew a line of text in the hope that it will be properly caught and processed?

  • Acct#034011234LastFirst

Here we have an example where Acct# and SSN have been concatenated and probably lose meaning outside of the process that knows to stop reading the Acct# field after X characters and read the next nine characters as the SSN. First and last names can be extremely difficult to distinguish without capitalization and/or localized knowledge of standard names. Michael Smith may mean nothing to a non-English speaker.
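
As a toy illustration of that last point, here is what such a consuming process might look like; the field widths, the sample account number, and the names are entirely made up:

    # Fixed-width parsing: the concatenated record only becomes information again
    # inside a process that knows the field widths. All values here are hypothetical.
    RECORD = "ACCT0042034011234SmithMichael"

    def parse_record(rec: str, acct_width: int = 8):
        acct = rec[:acct_width]                  # first X characters: account number
        ssn = rec[acct_width:acct_width + 9]     # next nine characters: SSN
        names = rec[acct_width + 9:]             # the rest: names, with no obvious boundary
        return acct, ssn, names

    print(parse_record(RECORD))   # ('ACCT0042', '034011234', 'SmithMichael')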

So what does this mean from a practical point of view? Without waxing philosophical, from an information security and protection standpoint, it is an extremely compelling reason to give serious consideration to Translucent Databases, which I will post about at a future point in time.