Today I and a number of friends ("seven IT and media commentators") publish a text on transaction transparency, something we see as desirable, on DN Debatt. This in contrast to the information imbalance we see arising between information-intensive organisations and the individuals who deal with them as customers or as citizens. We see that individuals do not understand the value of the data they share, nor do they have any way to assess that value without access to the same kinds of tools and the same amounts of data as the organisation itself has. This will not happen. As a counterweight, we propose that companies and organisations which place a value on information about their customers, or about other individuals they interact with, state the value they consider that information to have as part of their financial reporting. This gives customers the possibility to assess the value of the data they have shared.
The coming academic year, 2017-18, I will be at Stanford University, in its Department of Linguistics. I am looking forward to tugging at some of the most interesting loose ends from the past few years of technology development at Gavagai, in the hope of finding promising seams to work!
Professor Martin Kay, who hosts my visit, took me in for an internship at Xerox PARC in 1991. Now he will again be pointing out the best directions to develop.
Today I had the pleasure of witnessing the public defense of Stanley Greenstein's PhD dissertation on the legal implications of predictive modelling, "Our humanity exposed — Predictive modelling in a legal context", for which I was a co-supervisor on technical matters.
In his dissertation, Stanley gives an inventory of several legal frameworks which might be relevant to the effects predictive modelling can have on an individual. He discusses the risk of "potential harm": harms an individual might not even be aware have occurred, such as a somewhat higher interest rate or insurance premium, or not being selected for a job. He examines how European regulations on data protection and human rights apply to understanding such harms, and focuses on the target notion of "empowerment" as a legal concept to address the information imbalance between large organisations and individuals.
Learning to Generate Reviews and Discovering Sentiment. Alec Radford, Rafal Jozefowicz, Ilya Sutskever. https://arxiv.org/abs/1704.01444
In this paper (apparently only published through arXiv, so not carefully reviewed by anyone just yet) the authors present an intriguing result. They build a neural-inspired model (an LSTM, a fairly standard one) which predicts the next byte in a text, given the ones it has already seen. They train the model on product reviews, and then use its learned representation as input to a simple classifier. The model, in spite of being trained on bytes rather than words, does very well on classifying the sentiment of product reviews, better than many standard lexical models. The authors even find (to their own delight) an indicator cell specifically for sentiment, and show how it tracks sentiment along the progression of the text. This may seem strange, but there is a fairly reasonable hypothesis to explain the result: there is more to sentiment than lexical resources can model. This model appears to capture signal which is encoded in something more than the sequence of words.
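To make the setup concrete, here is a minimal sketch, in PyTorch, of the kind of pipeline described above: a byte-level LSTM language model whose hidden state is reused as a feature vector for a simple linear sentiment classifier. This is not the authors' code; the model size, the toy data, and the use of scikit-learn's LogisticRegression are my own assumptions for illustration.

```python
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

class ByteLSTM(nn.Module):
    """Language model that predicts the next byte given the bytes seen so far."""
    def __init__(self, hidden_size=256):
        super().__init__()
        self.embed = nn.Embedding(256, 64)           # one embedding per byte value
        self.lstm = nn.LSTM(64, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, 256)       # distribution over the next byte

    def forward(self, byte_ids, state=None):
        hidden, state = self.lstm(self.embed(byte_ids), state)
        return self.out(hidden), state

    def features(self, text):
        """Final hidden state after reading a review, used as its feature vector."""
        byte_ids = torch.tensor([list(text.encode("utf-8"))])
        with torch.no_grad():
            _, (h_n, _) = self.lstm(self.embed(byte_ids))
        return h_n[-1, 0]                             # shape: (hidden_size,)

# After the language model has been trained on a large corpus of product
# reviews (next-byte cross-entropy), its hidden states feed a simple classifier.
model = ByteLSTM()
reviews = ["Loved it, works perfectly.", "Broke after two days, avoid."]  # toy examples
labels = [1, 0]
X = torch.stack([model.features(r) for r in reviews]).numpy()
clf = LogisticRegression().fit(X, labels)

# The "sentiment cell" finding corresponds to inspecting individual columns of X:
# one coordinate of the hidden state alone separates positive from negative reviews.
```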
In general, coercing almost everything about language into lexical models (as recent results have done) fixes the representation at one level of analysis which happens to be accessible due to the nature of our writing system. Breaking this strong binding is probably a good idea.
Deese, James. 1962. On the structure of associative meaning. Psychological Review, Vol 69(3), 161-175. http://dx.doi.org/10.1037/h0045842
Deese, who has published extensively on association norms and the methodology of eliciting associations, discusses here what sort of relation the terms in associative pairs might have. Much of the paper argues that the methodology used heretofore has been faulty, and Deese's contribution is to properly introduce frequency into the model. He also discusses associative relations in terms of replaceability and combinability, and in terms of the asymmetry between items at different levels of hyponymy.
Deese posits (referring to previous work by Woodworth, Ebbinghaus, and Galton, to which I expect I might return further on) that the associative relation between elicited terms is not one of meaning in the way meaning usually is understood. (Woodworth, per Deese, classifies (grades, probably) words both by meaning and by meaningfulness; this needs to be looked up properly.) The associative relation is not readily mappable onto known lexicogrammatic relations.
I will try to make notes of interesting papers I read from now on. Instead of scribbling in the margins of other papers, napkins, and post-it notes, I will scribble here.
This morning, I gave a presentation to the workshop on Supporting Complex Search Tasks on how we at Gavagai handle complex information needs. Mostly I claimed three things:
- Complexity is not necessarily in the formulation of the information need. Most of our customers perceive themselves as having simple information needs, or at least needs that are simple to formulate in informal language. We believe an information system should accommodate this, and, if needs indeed are complex or change, allow simple and painless reformulation.
- The greatest challenge is attention to new information: introducing new information aggregation tools will add business complexity, not reduce it.
- Evaluation of information systems in the way it is done in academia is good for assessing progress on the cutting edge. Industry has a greater need for establishing best-practice guidelines and for satisficing technology needs than for optimising them.
Slides for my talk on Complex aspects of seemingly simple information needs.
In the discussions after the initial presentations, the workshop identified the need for a quality assessment of data collection methodology. We expect to suggest such a procedure for next year's edition of this workshop.