Saturday, June 30, 2012

Growing the Weavr feature-set: A story of algorithms and data visualization

After a comment by David Bausola stating that any discussion of weavrs should include a conversation about the bot's feature-set, I went about looking for a way to optimize this creature given what I had previously discussed (i.e., emphasizing its machinic qualities). My initial direction was driven by two impulses. One, when I had first attempted to create a weavr, I had tried to 'upload' every possible conceptual connection into its little box of "What Words Define Who I Am?" Sadly, this approach failed miserably, as there seemed to be some cap on the number of words I could input. Two, the weavr, in my mind, was a social tool that could help us better understand our social mechanisms while simultaneously decentrating those mechanisms away from their proto-human bias (e.g., intention, individuality, will, etc.). The goal is not to make 'Man' more like 'Machine,' but to obliterate the distinction altogether: sapiens machina.

Shortly after plunging into this exploration, I stumbled upon the work of Kevin Slavin, once again through CBC's Spark (179). It was a beautiful marriage. Slavin's discussion of 'algorithms as nature' only reemphasized my second motivating impulse: if something as stereotypically 'artificial' as an algorithm is in fact natural or a part of Nature--yes, the reified capital 'N,' Nature--then perhaps humanity had best reconceptualize the machine-human-nature relationship. That is, humanity's view of both 'Nature' and 'Machine' as diametrically opposed 'Others' is a serious hindrance to its self-conceptualization. It isolates the system from both its past (Nature) and future (Machine or hybrid) by making itself the indivisible divider: the Individual. Thus, for the very idea of 'algorithms as nature' to even make sense, one must decentralize one's current conception qua relation to the 'world.'

In his TED Talk, Slavin pushes this idea even further. Not only are algorithms beginning to dominate on the internet, in economics, and even in culture, but they are producing changes in these domains that are incomprehensible to humanity's current perceptions. When such changes occur in the system, we cannot see them, predict them, or understand them. To use my own jargon: as the machinic level of organization gains ever increasing precedence, our need for an integrative approach--a language and framework in which to operate in this domain--is of the utmost importance.

With this problem in hand, I reapproached the discussion of the weavr's feature-set. If the weavr is a tool that can help our growing understanding of the machine-human-nature triad in the context of social mechanisms, it must in some way answer a particular question related to this problem. It must bring an implicit element of this relationship into an explicit manifestation. It must help us 'see.' However, I do not believe the weavr is so simplistic that it can only help us answer one question. Rather, it seems to have the flexibility to answer many depending on the context we impart to it. Thus, this context and its particular implementation are a key point. For the Encodingist, it is the point at which we breathe 'life' into the organism. For the Interactivist, it is the framework in which an 'answer' will emerge and grow. And, in its current manifestation, it is more art (implicit) than science (explicit).

To anticipate a future critique: I have no problem with art and some might argue that art has explicit manifestations. It does. My point is just that the ambiguity of a gestalt does not offer the necessary formality needed to develop a framework for the language of the sapiens machina.

Perhaps, on the other hand, the distinction between science and art is a tad crude. Instead, it may be more accurate to say that the weavr needs more structure to better ground its implicit qualities. The goal, after all, when working at this level of abstraction and cross-domain integration (i.e., machine-human-nature) is to utilize the methods of both: to crystallize just enough of the raw potentialities that the underlying emergent qualities remain visible (i.e., do not stagnate in crystallization) but neither do they vanish in the vivid halo of a hallucination. The weavr is unique because it already sits at this intersection. This brings me to my second key point: the 'answer.'

Currently, we can gather a primitive answer by asking the growing weavr various questions, seeing what new emotions and their respective associations it has picked up, examining what it has stumbled upon on the web through its posts, etc. However, most of this is very distant from the original question as it was framed in the context we uploaded into the weavr's system. Words may repeat or appear. Certain vague patterns suggest themselves, but the creature is rather opaque. I lack a way to 'see' into the world of the weavr as it has changed through its interaction with its environment and self. Without giving it a series of questionnaires before and after certain time frames as I might a human, I am left in the dark. Thus, perhaps the construction of the weavr is overemphasizing the 'human' qualities of the system in order to demonstrate their ontology. With this emphasis, it is easy to forget the beauty of the underlying formality of any digital structure. The architecture of the code and the digitized information offers the perfect entry point for further formalization qua crystallization of emergent features. In less rigorously formalized systems like humans, this is much harder to accomplish. Thus, we must not forget the 'machine' part of the machine-human-nature interrelation.

On the other hand, it is possible to argue that the questions the weavr answers are not a part of the local system per se, but emerge through the way in which humanity interacts and responds to it a la Jon Ronson. I agree with this, perhaps, as a third point. However, even if this is the case, then there still needs to be a method in which we can examine these features. Perhaps a prosthetic that maps global patterns of behavior across different feature sets (e.g., other prosthetics). Perhaps one that allows a weavr to look through various search engines for reference to itself and comment on it. The more information that is mapped, the greater the number of relations, the better as far as I am concerned.

I choose to shirk the discussion of evaluation or quality because of the necessary context that must be presupposed. If we take information generically in whatever limited way we can, then sheer volume is beneficial as it allows for a multiplicity of contexts. It places less restriction on the questions that might be asked. In this case, more is better. But, this 'more' still requires a method of interpretation: a way to 'see.'

Data visualization is an area of study that fits perfectly into the framework in question. A TED Talk by David McCandless offers an excellent introduction to and illustration of the field. In a nutshell, McCandless demonstrates that graphical metaphors offer an incredible way to condense massive amounts of data into comprehensible packages. Data visualization offers a clever method to make implicit phenomena blatantly explicit. It allows our natural processes to integrate our visually dominant, human-centric world with one of probabilities, statistics, and mass-scale data patterns: the machinic level of organization. Thus, data visualization--or perhaps data design--is the perfect language for the future sapiens machina.

One design framework or technique I commonly use to conceptualize abstract spaces, especially if they are highly interconnected or system-based, is an intersection of graph and set theory. I 'see' nodes and edges or bubbles and lines. Yet, I also see more than that. I 'see' sets of nodes and edges denoted by any type of pattern in the graph (e.g., a node, a cluster of nodes, a cluster of clusters of nodes, an edge, potential edges, external data sets/properties, etc.).

One way in which I could see an implementation of all of the ideas discussed here is through this framework. That is, rather than boxes or categories into which the user drops various words and still further categories that draw the relations between these words, give the user an elementary graph. Let them drop their words into bubbles and connect them via lines to other bubbles. When you want to add sophistication, allow the bubbles to represent clusters of bubbles (i.e., filter or restrict the current domain of inquiry). I like to think of this as collapsing dimensions in phase space. Then, as a second feature, allow the user to check back on this graph in future states and 'see' what new connections have developed or new nodes have appeared. Searching for self-references is but another way to further this map. And, allowing the maps to feed into one another (i.e., accessing another weavr's map through a node) would allow for even larger gradations of abstraction as you move up or down the ladder of organization (i.e., micro or macro patterns). In some way, I can see this as a development of a social semantic web.
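To make the proposal concrete, here is a minimal Python sketch of that graph interface. All names here (ConceptGraph, collapse, diff) are my own invention for illustration, not part of any weavr API; the sketch covers the two features described above: collapsing a cluster into a single bubble, and diffing two snapshots to 'see' what has developed.

```python
class ConceptGraph:
    """Bubbles (nodes) holding words, connected by lines (edges)."""

    def __init__(self):
        self.nodes = set()
        self.edges = set()  # each edge stored as a sorted 2-tuple

    def connect(self, a, b):
        """Drop two words into bubbles and draw a line between them."""
        self.nodes.update((a, b))
        self.edges.add(tuple(sorted((a, b))))

    def collapse(self, cluster, label):
        """Collapse a set of nodes into one bubble, re-routing edges.

        Edges entirely inside the cluster disappear (the dimension is
        'collapsed'); edges crossing the boundary attach to the new bubble.
        """
        g = ConceptGraph()
        rename = lambda n: label if n in cluster else n
        g.nodes = {rename(n) for n in self.nodes}
        for a, b in self.edges:
            a2, b2 = rename(a), rename(b)
            if a2 != b2:  # drop edges internal to the cluster
                g.edges.add(tuple(sorted((a2, b2))))
        return g


def diff(before, after):
    """'See' what new nodes and connections appeared between snapshots."""
    return {"new_nodes": after.nodes - before.nodes,
            "new_edges": after.edges - before.edges}
```

A user would build the initial graph by hand, let the weavr run, and later call diff() on the stored and current states; collapse() filters the domain of inquiry when the graph grows too dense to read.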

In case this idea interests anyone, there are a variety of Python graph APIs that might be useful. This site has a link to a bunch of them. Not all have visualization features.


Tuesday, June 26, 2012

Machinic: An illustration through Bausola's Weavr

Inspired by a podcast of the CBC radio show Spark, especially given my previous post, I have decided to expand the idea of machinic organization as it relates to David Bausola's weavr: a social bot. 

Nora of Spark starts her discussion with the observation that traffic on the internet is shifting from a predominantly user-based environment to one dominated by bots and their ilk. She is correct: humans are no longer central in the framework of the internet. In some sense, we have been replaced by bots, and not with a resulting decrease in quality. In fact, given that these bots aid in the creation of networks that support things like Google's search engine, one could argue that things have improved. A great deal of these bots are also malicious in nature (e.g., spam bots), so one could argue in the other direction as well. But whether this is for better or worse is a moot point, as it is happening regardless. The point is that not only was the persistence and functioning of the internet not tied to any particular individual, it was not even tied to any particular system--where 'system' is used very abstractly to represent any organized assemblage of parts, including humans and bots. Thus, the shift highlights what some have described as a decentration, one in which both humanity and individualism must be replaced by a framework of broader scope. In this ever-changing landscape, the weavr comes to the fore.

The weavr is a strange bot that operates in social networks of various kinds. It defaults to having a blog and can be interfaced with Twitter. It dialogues, wanders around with Google Maps, and draws associations between its 'interests,' 'emotions,' and various other data (e.g., its position on Google Maps and the position people are posting various materials from). On the broad side, it even dreams.

I simply adore this little creature and Bausola's emphasis on the idea of emergence is certainly invigorating. However, I recently had a shift that changes my relation to this idea as it relates to AI. The shift occurred with a thought that was similar to the following:

If a 'truly' novel intelligence on par with humanity were to be created, what benefit would come to humanity?

The thought stemmed from the work of Mark Bickhard. I briefly commented on a similar element of Bickhard's work in the latter part of this post. In short, Bickhard's critique of representation in the Encodingist framework results in the following dilemma: any construction qua representation that I overlay over this hypothetical 'super' AI must necessarily be separate from the functioning of that AI.

It is worth noting that I am neither being a phenomenologist nor an epiphenomenalist when I make the previous claim. It is an entirely different framework. To emphasize this, I will switch my use of the Encodingist 'representation' to the Interactivist 'anticipation' while relying, somewhat, on the common sense view of anticipation to get me through the analogy. The result follows: my anticipations of the AI are separate from the functioning of the AI much like my anticipations of other people are separate from the functioning of other people. The key point here is that my anticipations can be wrong and hence have to be separate. This "have to" is crude, but a more sophisticated discussion is beyond the scope of this post. I urge anyone who is interested to read Bickhard's text for a much more detailed and eloquent rendition of the problem, especially Chapter 7, p. 55 and onward.

Given this framework, the 'super' AI, at the moment of its attaining this level of sophistication, becomes inherently 'Other' to me. But, this degree is just the beginning as even other humans can be 'Other.' There is also the difference of species: my framework of anticipations was built in interaction with other humans. And, though my association between the AI and humanity was originally justified in the construction of the 'AI as tool,' the 'AI as self-organizing/maintaining system' is outside of this domain. Thus, it warrants the description as 'truly alien.'

One could argue that we have at least elementary forms of such self-organizing systems and, as such, the transformation would not warrant the degree of 'Otherness' that I am proposing. However, I would argue that this is false. At most we have more sophisticated forms of 'AI as tool' that allow for modest degrees of self-organization in respect to a specific task. Simultaneously, then, I can push the requirements for this purely hypothetical 'super' AI further out by requiring that they possess the ability to be "recursively self-maintaining" a la Bickhard (p. 21).

Regardless, my point is that this transition of the AI from a tool to a functioning entity is not useful. First, the very idea that there could be a transformation is Encodingist: it is the magical transformation that takes a material substrate, 'AI as tool,' into the efficacious realm of a symbol manipulator, 'AI as individual or recursively self-organizing system.' In Interactivism, there is no such transformation. If anything is problematic, in the Interactivist context, it is systems that are more than locally self-organizing, which are more likely to be unwieldy. Thus, and to my second point, I can only imagine how problematic it would be if I had to convince my calculator to crunch numbers for me.

To return to the discussion of weavrs but in this new context, we can work towards a better conceptualization that does not have the Encodingist overtones of Bausola's infomorph while maintaining its machinic qualities. Weavrs are a new type of social tool. They emphasize the growing shift on the internet away from users and towards pseudo-autonomous functions by invading a domain that previously excluded bots by definition: the social sphere. They also give us insight into some elements of how we as humans work, but not by possessing the individualistic properties of humans (e.g., intention). Rather, it is by showing that humans do not have those properties either.

This is the decentration. This is why it is called "machinic" organization. This is also probably why Jon Ronson got so upset about the weavr of the same name: not only does the weavr partially delegitimate the particular individual, it delegitimates all individuals even if just potentially (i.e., even if some future update may take it to this degree entirely but the current one is still too limited). Thus, I believe I can answer both Olivia Solon of Wired's question, "what do you think of weavrs?" and Bausola's question about what weavrs are in a single comment:

Weavrs are simply a tool for social exploration. But, by being such, they anticipate a time when all such exploration is relegated to their ilk. Through this anticipation they mark the end of humanity as organism... as 'system' par excellence and in place they speak of a time when a human-function is no more valuable than an AI-function and no less replaceable. This is of the utmost significance to both the study of AI and humanity as it relates to itself.

Pictures courtesy of @PixelNinjWeavr.

Sunday, June 24, 2012

Augmented reality 2.0

It seems that I was previously mistaken. There are a number of groups implementing preliminary hardware and software, for mobile and otherwise, that are beginning to collapse the divide between our cyber and material worlds.

It's hard to comment on all of this stuff as I am a tad overwhelmed. Nevertheless, here are some preliminary thoughts.

For the games, I would like to see some gaming classics like PacMan get reinterpreted. It would also be nice to see someone explore the current limitations of our hardware so that we have an idea of where the boundary is in this domain. I think the explosion of excitement in this industry, though wonderful, had best be reined in. The domain is potentially infinite, but currently has some solid limitations in our hardware and software design. Rather than see programs and apps that really push the limits, I would like to see some slightly more modest but functional attempts. Take one facet of the infinite potential and develop it until it has some obvious practical usefulness. And, though I love these ideas (1 2), wearing a giant suit or helmet is... not feasible.

In this vein, I think I am following other thinkers. Finding ways in which augmented reality, given its current limitations, can legitimately reinforce or support our current existence in our environments is pertinent. That is, given a camera, GPS, a small screen or primitive glasses (1 2), an accelerometer and/or gyroscope, a live feed to the internet and the cloud, audio input/output, and a touch screen, what can be implemented?

The cloud offers the ability to extend our memory capacity if we can find a seamless way to 'recall' the information or begin to implement triggers (e.g., visual, auditory, etc.). If someone could find a way to integrate this with multilingualism (i.e., by having a list of common phrases, etc.), I think traveling would become much simpler.
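As a toy illustration of that trigger-based recall for travelers, here is a sketch assuming a pre-built, cloud-synced phrasebook keyed on context words. Every entry and trigger below is invented for the example.

```python
# Hypothetical phrasebook: context triggers -> phrases by language code.
PHRASEBOOK = {
    ("restaurant", "food", "eat"): {
        "fr": "L'addition, s'il vous plaît.",
        "es": "La cuenta, por favor.",
    },
    ("train", "station", "ticket"): {
        "fr": "Un billet, s'il vous plaît.",
    },
}

def recall(context_words, language):
    """Return phrases whose triggers overlap the detected context.

    context_words might come from audio or visual triggers on the device;
    here it is just a list of strings.
    """
    hits = []
    for triggers, translations in PHRASEBOOK.items():
        if set(triggers) & set(context_words) and language in translations:
            hits.append(translations[language])
    return hits
```

The interesting design question is the trigger side: the same lookup works whether context_words comes from speech recognition, image labels, or GPS-derived place categories.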

The accelerometer and gyroscope offer an alternative to touch screens. Just hold your phone and swipe it around like a computer mouse. In fact, I'm surprised I have not seen any software implementing smartphones as UI for standard computers. Though, I have heard of other countries using it for UI more generically. And, actually, the idea of being able to graphically scan anything and have my smartphone find where I can buy it, virtually or otherwise, is certainly worth exploring.
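A minimal sketch of that gyroscope-as-mouse idea: integrate angular velocity samples into cursor deltas. The sensitivity constant is an arbitrary tuning value I made up, not taken from any real SDK.

```python
SENSITIVITY = 200.0  # pixels per radian of rotation; illustrative tuning value

def cursor_delta(yaw_rate, pitch_rate, dt):
    """Convert angular velocity (rad/s) over dt seconds into pixel deltas."""
    dx = yaw_rate * dt * SENSITIVITY
    dy = -pitch_rate * dt * SENSITIVITY  # tilting up should move the cursor up
    return dx, dy

def track(samples, start=(0.0, 0.0)):
    """Accumulate a stream of (yaw_rate, pitch_rate, dt) samples into a position."""
    x, y = start
    for yaw, pitch, dt in samples:
        dx, dy = cursor_delta(yaw, pitch, dt)
        x, y = x + dx, y + dy
    return x, y
```

In practice the raw sensor stream would need filtering and drift correction before this integration step, but the core mapping from device motion to pointer motion is this simple.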

This software has some interesting uses socially. I can see a time when people have clothing that encodes their 'avatar' (i.e., appearance) in cyberspace so that they are seen as such through AR. Subcultures, I'm sure, will be able to integrate this as an interesting form of identification. There are possible consequences here for gangs, etc., as well as the police. On the extreme end, machines could be seen in a more anthropomorphic frame, which might have interesting repercussions for our relationships with them. We may also be able to project our avatars to new places in order to send cues, or even interact with the world. Imagine if your phone's 'ring tone' was the person appearing in your peripheral vision and waving at you. There are interesting implications here for the research I have heard of on less invasive forms of human-computer interaction.

Actually, this would be an interesting way to implement some of the ideas I discussed previously. I could have it such that messages or social cues, interests, etc. are encoded right on my person (e.g., approachable, busy, single, etc.). If people start doing this with tattoos, that would certainly be fascinating. And, to forecast a future post, the ability to tag others given a particular network or context would be fascinating, too, especially with ratings.


Friday, June 22, 2012

A comment on "The PR Crisis of Democracy"

It is incredibly refreshing to see someone challenge something as fundamental to North American culture(s) as democracy, and Neuroskeptic did an excellent job of just that. In a nutshell: how can democracy continue to be touted as the 'ideal' amid growing discontent driven, in many ways, by our adoration of immensely reified and misused concepts like "Freedom"? As that link illustrates, I am not entirely sure I know what Freedom means anymore: differentiation to the point of emptiness.

I would liken the current uproars in various countries to the "terrible twos" of childhood where the budding personality, to the 'horror' of everyone, learns to say, "No." I draw this connection not to mitigate the effort and sacrifices made by these groups. But, rather, to highlight the conflict in a developmental sense: many of the ideals we hold are incredibly naive and incompatible with the way we live our lives. They are juvenile in the sense that they lack subtlety and tend to approach a problem by the most direct route (e.g., by creating a false, black-white dichotomy). That is, one could arguably describe a network, in an anthropomorphic sense, as juvenile if it lacks the appropriate complexity of connections necessary to approach a particular problem in a non-dichotomous framework. Democracy, then, is a cultural celebration of that era. And no, I'm not secretly suggesting a silent recapitulation theory here. The comparison is just interesting.

I have, in the past, drawn the association between cultures and individuals of varying age groups. That is, if, for example, countries were people then you can get some interesting implications on the basis of age alone.

India, China, Japan, etc. as older cultures might be the grandparents of the circumstances. Though some may be decrepit and need to be institutionalized, every once in a while they can toss a kernel of wisdom your way. At very least, they tend to be more patient and slower moving.

Europe, in contrast (and you can see my North American bias here as I lump all of Europe into one category), can be likened to an individual in their late 40's, maybe even into their late 50's. They're old enough to 'know better,' but still young enough to do unintelligent things. Arguably they are currently experiencing a mid-life crisis.

The U.S. is at the end of its rebellious teenage years: still vigorous and aggressive but starting to catch on to the subtleties of the world. Canada is the younger brother that cannot help but follow its older sibling's often terrible advice. Australia is the youngest and, due to its position and temperament, still thinks it's 'play time.' Russia and Mexico are the ever budding bullies on the playground.

And where does everyone else fall? Who knows. Many of the other cultures are the remains of ancient civilizations that have become so prolific that they are best taken as the ground of the rest of the world. A ground that is ripe to be raped, pillaged, and set on display at the whims of the 'family.'

Democracy, in this framework, is merely an appeal to the beauty of a freedom that was always hoped for, pseudo-achieved, and difficult to let go. Though this story is entirely implausible, a bad rendition of an Eikos Logos at best, it does illustrate something important here: self-regulation is a process that operates and/or needs to operate at all levels of organization and it is key to development. Development means change and even something as apparently permanent as democracy will come to an end.

In an effort to motivate a conceptual shift that operates more smoothly in non-democratic frameworks, I offer the idea of machinic organization (you have to read a bit to get to the reference). Note that, though elements of the ideas I will set forth may look like familiar architectures such as communism or totalitarianism, they are not the same, much like a pendulum is not the same for a clockmaker and a scientist.

The best description of what I mean by machinic and how it applies here is illustrated by William Gibson's Neuromancer, though the association is not my own. In this fictional universe, heads of corporations are regularly assassinated by other corporations, but to no consequence. As soon as one body dies, a new one takes its place, and a database of memories is waiting to help reintegrate the new body with the role.

By suggesting the concept of machinic I am neither trying to perform an upward causal reduction (extreme top-down framework) nor am I trying to suggest that we will one day be some rendition of the Borg. A technological singularity, though plausible, is not what I am referring to. The goal is to illustrate that highly individualistic concepts in the standard Western context like freedom, prosperity, wealth, justice, and progress become extremely unfamiliar when the framework shifts. As I said in a previous post that is particularly relevant here: when your individuality is no longer relevant and you assume the role of a statistic, all these concepts drop away.

I will reinforce the fact that this does not mean you are not free, prosperous, wealthy, just, or progressing (though this latter one is a tad odd to be combined with the rest by Neuroskeptic). It simply means that these concepts no longer have application in this domain (much like dividing by zero in most mathematics). Neither does it mean that one cannot examine these properties--this is not a moral or ethical discussion. It just means that in a machinic framework, the end of democracy makes sense. At the extreme point, it never existed in the first place (I'll evade the "problems with ontology" rant here).

Naturally, given that none of these activist groups are likely to cease their activities, even if they were to read my blog for some reason, the efficacy of this shift in framework is likely at question. But, the intention is not to change these behaviours in any practical sense. It is to begin adopting a framework where they simply no longer are relevant. I do not protest them. I do not suggest we ignore them. I am really not saying anything about them at all.

How do you destroy a product? You stop buying it. How do you stop buying it? You eliminate the ideology that justifies its use. Remember, people are irrelevant. One extremist group can and will be easily replaced by the next. But, if we change the ideology such that it removes the efficacy of that maneuver, then there is nothing to be gained. People simply will not even think about it. This is the goal.

Let's stop attacking the symptoms and acausally delegitimate the framework that non-linearly supports the problem.


Thursday, June 21, 2012

A way to check your backlinks

For those of you who are not veterans of the blogging world and, doubly, who are interested in utilizing my alternative to comments, I offer you the following:

Just enter this into your browser and replace "YOURBLOGNAME" with the appropriate text to find the number of sites that have linked to your blog/site (courtesy of this site).

Either I or someone savvier than I should at some point integrate this into a gadget for Blogger. I don't recall seeing such a thing in the list last I checked and it shouldn't be a difficult implementation.

On a more detailed check, I found two neat implementations of this here and here. You have to scroll down a bit for the latter.

iWitness and tech acceleration part 2

This is a continuation of my last post; the gist being that this new app is a surprisingly close approximation of a project I was working on myself. However, it is missing some features that I would like to describe.

A word of warning: I lack an internal description of the app's functioning. Hence, some of these adjustments may not be applicable or may have already been implemented.

First, it would be useful to have a brief description of each and every client that could be sent to the police in addition to their GPS coordinates.

Second, if raw GPS data is being sent, one can implement a more detailed description of the client's whereabouts, leaving less processing on the side of the police. A lot of the difficulties associated with E911 are related to the challenges in locating an individual when they are not calling from a land line. The result is that an operator must reroute you from a central hub in order to place your call to the appropriate PSAP (i.e., valuable time is being wasted and you have to be able to speak). Thus, by sending even raw GPS data, iWitness is minimizing the second factor, but increasing the first if the integration is not seamless (i.e., processing of the raw GPS coordinates must occur without forewarning). My solution to this problem was to use this site to determine the nearest street address as well as the country of origin (necessary for global implementation), etc. The long-term goal was to be able to route to the PSAPs directly.
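That direct-routing goal reduces, at its core, to a nearest-neighbour search over known dispatch-centre coordinates. A minimal sketch using the haversine great-circle distance follows; the centre names and coordinates are invented for illustration, and a real system would need an authoritative PSAP boundary database rather than simple proximity.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km = mean Earth radius

def nearest_psap(lat, lon, psaps):
    """psaps: list of (name, lat, lon) tuples; returns the closest centre."""
    return min(psaps, key=lambda p: haversine_km(lat, lon, p[1], p[2]))
```

With the caller's raw coordinates in hand, the app could resolve the destination before the call is even placed, cutting out the central-hub rerouting step described above.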

Third, it would be my hope that the producers of iWitness will be keeping track of the statistical usage of their app. All of that data related to when and where the app is armed and fired is likely to be crucial for future systems. These systems, like Y!kes, will be able to use positional data to warn the client that they are entering a 'redzone,' for what reasons (e.g., by implementing a feedback system for clients), perhaps auto-arming, and/or auto-warning friends, family, or parents that this individual is in a vulnerable location or that their app is armed.
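One hedged sketch of how that redzone detection could work: bucket past firing events into coarse latitude/longitude grid cells and flag any cell whose count exceeds a threshold. The cell size and threshold below are arbitrary illustration values, not calibrated figures.

```python
from collections import Counter

CELL = 0.01  # grid cell size in degrees (~1 km of latitude); illustrative

def cell(lat, lon):
    """Map a coordinate to its grid cell."""
    return (round(lat / CELL), round(lon / CELL))

def build_hotspots(events, threshold=3):
    """events: list of (lat, lon) where the app was armed/fired.

    Returns the set of grid cells with at least `threshold` events.
    """
    counts = Counter(cell(lat, lon) for lat, lon in events)
    return {c for c, n in counts.items() if n >= threshold}

def in_redzone(lat, lon, hotspots):
    """Warn the client if their current position falls in a flagged cell."""
    return cell(lat, lon) in hotspots
```

A production system would smooth across cell boundaries and weight events by recency, but even this crude aggregation turns individual incidents into the kind of communal, anticipatory signal described above.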

Fourth, I can see this augmentation of the 911 service quickly transferring to other emergency services: hospitals and fire departments. Distributed processing and decentralization would become crucial in these frameworks as the user becomes the ever-present eyes, ears, and hands of these services. Virtually unlimited amounts of information about households and medical records could begin to be stored by the users and for the users. And, if one user fails to indicate a problem, any and everyone else will have the capacity to capture valuable forms of information for transmission to the relevant services and vice versa from the services to the user. Location mapping and hotspots will become useful sources for community objectives while communities themselves become stronger through cross-communication. At least, this is the potential.

There are potential issues here regarding the security of this data, but these infrastructure kinks are likely to have been problems all along (i.e., the whole system needs to be updated). Thus, the fact that they are inescapable in this new framework is in some ways beneficial in that solutions will be sought and the system will adapt.

Similarly, distributed, user-based networks are still criticized in some circles (e.g., Wikipedia in most universities). The possibility of imprecise or blatantly incorrect data is always present. However, in my mind, as long as the users (on both sides) know the limitations, these primitive infrastructures are still better than none. Plus, even centralized data entry must be checked and rechecked.

In sum, as technology becomes ever present in our lives, society and its members are going to have to become more technologically inclined overall. Limited coding knowledge and an understanding of electronic architecture are going to need to be basic parlance. Protective mechanisms will be no different than modern day clothing. The fact that we can currently run around naked in cyberspace does not legitimate the behaviour.

Wednesday, June 20, 2012

iWitness and tech acceleration

To those of you who had heard of my idea to augment the 911 service through an app, it turns out that someone beat me to it. Meet the iWitness.

And I must say, the audio-capturing idea is a clever replacement for my thoughts on emitting and capturing perpetrators' mobile phone numbers via some complex effort on the part of the police, cell phone services, and hardware manufacturers.

It seems that the rate of technological growth is such that you can barely think an idea before it is created!

A longer post will follow when I'm not at work.

Pong and Predictions

When I saw this competition I could not help but laugh. Those of you associated with my Facebook account will recall this post:

I just got my first flash game up and running! Anyone who has a moment, check it out and send me your comments! It's a pong variant with a twist!

Looks like I (unintentionally) called that one, even if my rendition was primitive, to say the least.

Y!kes: Implications for augmented reality

The current take on mobile applications demonstrated by the upcoming software Y!kes is an inspiration for us all. Too long have we lived in Descartes' rendition of cyberspace--a fact that is all the stranger given the increasingly pervasive smartphone. How is it that so few people are capitalizing on the capacity of this conveniently sized, networked computer to augment our day-to-day reality? The answer eludes me. Nevertheless, I think the move Y!kes is making is a big one, and not just for hotels.
What Y!kes seeks to do is transform your mobile phone into a networked passport for your evenings away from home. Elevators pick your floor; doors unlock and lock themselves; your bed and breakfast is prepped on arrival at the airport. Surely, this is just the beginning.

I personally see this idea expanding into life in general and, especially, social networking. When you walk into your local store, relevant advertising and specials will pop up on your phone (e.g., Google, Facebook, Cloudmade). Hotels will start registering you as you enter their establishment. Restaurants will seat you, hand you the menu, and begin taking your order. Your mobile will create a seamless flow of relation without all the standard wait times and administrative hiccups. And this is just via position. Start scanning audio and you expand the framework of anticipation. Add visual scanning and augmentation and you are really rolling. Perhaps this may all seem far away, but at the current rate of technological advancement, how long, really, is 'far away'?

In the social sphere, I can see networking sites starting to really utilize the presence of the smartphone, as Y!kes has. People will be able to scan for live profiles in their immediate vicinity, which will or won't be visible depending on the networks you are associated with. Social interaction will be augmented as individuals will already know each other's likes and dislikes, favourite topics of conversation, and more. They will be able to ping one another to invite conversation or be in an 'open' status where meetings are always welcome. They will be notified when friends are close by, assuming they are 'live.' Social reality will be augmented.
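A minimal sketch of the proximity scan described above, assuming invented profile fields (`live`, `networks`, x/y coordinates) and an arbitrary 100-unit visibility radius; a real implementation would of course run against a live location service:

```python
# Hypothetical sketch of "scan for live profiles nearby": a profile is
# visible only if it is live, shares a network with the scanner, and sits
# within a straight-line radius. All names and numbers are invented.
import math

def nearby_profiles(me, others, radius=100.0):
    visible = []
    for p in others:
        if not p["live"]:
            continue                      # hidden: user is not 'live'
        if not set(p["networks"]) & set(me["networks"]):
            continue                      # hidden: no shared network
        dist = math.hypot(p["x"] - me["x"], p["y"] - me["y"])
        if dist <= radius:
            visible.append(p["name"])
    return visible

me = {"x": 0, "y": 0, "networks": {"jazz"}}
crowd = [
    {"name": "A", "x": 30,  "y": 40, "live": True,  "networks": {"jazz"}},
    {"name": "B", "x": 30,  "y": 40, "live": False, "networks": {"jazz"}},
    {"name": "C", "x": 500, "y": 0,  "live": True,  "networks": {"jazz"}},
    {"name": "D", "x": 10,  "y": 10, "live": True,  "networks": {"chess"}},
]
print(nearby_profiles(me, crowd))  # only A passes all three filters
```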

So what are you waiting for, developers? Let the technological race begin!

Tuesday, June 19, 2012

An emphasis on hardware

I would like to take this article as a jumping point.

It's interesting to run into an article of similar mind, if from a different position. The gist of the article is predictive: recent advancements in software development may be shifting the advantage towards hardware development. That is, micro development is now possible materially as well as virtually. My thoughts are an effort to move this development forward through a re-conceptualization of the environment, much as Brondmo did in the article above.

It is my sense that the great majority has largely forgotten about hardware. Software has become predominantly cross-platform, while decent hardware comes in increasingly cheap package deals at your local tech shop (i.e., you don't even need to know what you're buying to run it). Yet, in place of this standard conception of hardware as 'the thing on which the software operates,' I would like to propose an alternative: materiality is an equally infinite potential.

A few thought experiments.

First, take any old intersection with a traffic light during rush hour. Assuming two individuals knew of this intersection, agreed on a time to watch it, and had a means to control the traffic light, they could transform this innocuous location into a transmitter of information. That is, the cars themselves could become data packets, either ones and zeros or Morse code's dashes and dots.
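The traffic-light thought experiment can be sketched in a few lines. Everything here is hypothetical: the encoding (one released car for a dot, three consecutive cars for a dash, held gaps as spaces) and the two-letter Morse table are invented purely to make the idea concrete.

```python
# Hypothetical sketch: encoding a short message as traffic "pulses" at a
# controlled intersection. A released car is a 1, a held gap is a 0.
# Only two letters are defined; this is an illustration, not a protocol.

MORSE = {"S": "...", "O": "---"}

def cars_for_message(message):
    """Translate a message into a schedule of car releases (1) and gaps (0)."""
    schedule = []
    for letter in message:
        for symbol in MORSE[letter]:
            if symbol == ".":
                schedule.append(1)           # dot: release one car
            else:
                schedule.extend([1, 1, 1])   # dash: release three cars
            schedule.append(0)               # gap between symbols
        schedule.append(0)                   # extra gap between letters
    return schedule

print(cars_for_message("SOS"))  # a 27-step release schedule
```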

Second, take any communication line (Ethernet, coaxial, fiber-optic, etc.). In the standard conception, information goes into the line and comes out of the line with a bit of noise in the middle. This middle part seems particularly interesting to me. Is it possible to create information through the use of interference patterns in the actual signal? Perhaps the role could be that of a firewall. If the signal is sent incorrectly, the interference distorts it incoherently and, thus, it becomes noise. With the right signal, both the information being sent and its reception become coherent. Thus, you are actually getting two things in one: encryption and decryption (or a firewall) at the hardware level.
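As a loose software analogy for this hardware idea, one can model 'interference' as XOR-ing the byte stream with a keyed pattern: applying the same pattern again restores the signal, while any other pattern leaves noise. The key bytes below are arbitrary, and real line-level interference would be an analog affair; this sketch captures only the logic.

```python
# Illustrative analogy only: "interference" as a self-inverse XOR pattern
# injected mid-line. The same pattern applied twice cancels out, so a
# receiver with the agreed pattern recovers the signal; anyone else gets
# noise. The key value is arbitrary.
from itertools import cycle

def interfere(signal: bytes, pattern: bytes) -> bytes:
    """Apply an interference pattern to a signal (XOR is its own inverse)."""
    return bytes(b ^ p for b, p in zip(signal, cycle(pattern)))

message = b"open sesame"
key = b"\x5a\xc3"                          # the agreed interference pattern
garbled = interfere(message, key)          # what travels down the line
assert garbled != message                  # incoherent without the pattern
assert interfere(garbled, key) == message  # coherent with it
```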

Third, take, once again, a firewall, but a standard software-level firewall, in this case. This firewall, when it detects a malicious threat, assesses the degree of the threat, traces it, and disables its router at the hardware level. If this is too volatile a tech to give to consumers, then perhaps the firewall sends information to a centralized agency/computer that flags the threat, continues assessment and proceeds with physical dismissal of the router or a complete decommissioning of the computer.

The latter idea is likely dangerous, but it illustrates my point quite well: hardware is a virtually untapped domain given the new tools we have developed in the previous software era.

Monday, June 18, 2012

An end to privacy

Both Facebook and Google intermittently incite public uproars due to their potential breaches of any normal human's 'decency.' This never ceases to be a curiosity in my eyes. Do people honestly think we live in a world where privacy even remotely exists anymore?

We're filmed at work by our employers, filmed walking down the street by various satellites and stores, filmed by stoplight cameras, filmed by smartphones and digital cameras of various kinds. We have built-in tracking services attached to our every phone call, landline or wireless. Smartphones, with their GPS, are walking tracking devices. Our cars and laptops are tracked for security reasons. We tweet, post, or otherwise link our web services to our current locations. And this is just our position.

Ninety percent of stores have our address, phone number, name, and purchase history. Both of the giants we began the story with will store any and every last bit of information you'll give them. Every other website is making you sign some agreement that you never even took the time to read... (Yes, that was a lot of hyperbole!)

Let's face it, if someone, somewhere, doesn't know or have the potential to know the colour of the last pair of socks you wore, I would probably be surprised. If privacy is your goal in life, you're already bloody doomed.

But, there are two reasons why this is completely inconsequential.

1. The average individual, in the grand scheme of things, is completely irrelevant, expendable, and replaceable. We are just another statistic. A blip on a graph that can just as easily be replaced by the rest of the population that falls within a standard deviation of the mean. And, better yet, if you do happen to deviate from that population, you're even more irrelevant. Who would you market to? The strange person in the corner or everyone else?

For the sake of brevity, let's ignore the complexities of sub-populations and the leading edge of the popularity wave (which should be likened to becoming the next Michael Jackson). If you're one of those people, you probably don't even write your own social networking material, anyhow. That or you're savvy enough that this entire conversation is moot.

The point is... you can feel safe in your irrelevance.

"But what about those folks I know, ten times removed, who lost their entire livelihood thanks to..." I can hear you say.

The answer is obvious. We're statistics, right? A certain percentage of people are going to have bad things happen to them: the Canada goose that should not have arrogantly crossed the street.

"But if we had had our privacy..."

Oh no. You can't get out that way. Those privacy settings, as I tried to demonstrate above, don't do jack. You would need to drop out almost entirely or spend the greater part of your time hiding yourself. That is, to avoid getting hit by a car, you would have to stop crossing roads (e.g., RFID credit card fraud is just one demonstration that your information is already everywhere and vulnerable). Though some percentage of people will adopt this method, the greater majority will not. Surprising? Hopefully you're not afraid to cross the road...

But my point is not that people should stop shouting nor is it that your privacy settings should be at zero. It's that none of this has anything to do with 'privacy.' The reified construct hides more than it helps. Address the problems that come your way as best you can and then forget about them. But, name them as they are: "Facebook and Google have massive amounts of your information that is potentially vulnerable to an attack by malicious third parties." That's not a privacy issue. It's a software one.

On to the second point.

2. Personally tailored advertising is genius.

Let's face it. As members of most Western societies, we buy stuff. We like to buy stuff. We spend a greater part of our lives enabling ourselves to buy stuff. If every single ad I ever saw from this point forward were perfectly tailored to inform me of the things most relevant to me that I could buy (think of Amazon's "other users who purchased this also purchased..."), I would almost watch commercials instead of regular television. I don't think I am the only one who would feel this way. But, if we accept that tailored advertising is not the next apocalypse, where do we stand on this issue?

I think most people do have a problem with people making money off of their actions, especially if they don't get a piece of the pie. I, certainly, often feel this way. However, I think we have the order of things reversed. It is not the case that Facebook and Google are providing you a service and then capitalizing on your use of the service. Rather, Facebook exists to make a (tidy) profit and then the user leeches off of this fact in such a way that they get services out of it (more likely this relationship is bidirectional but I choose to emphasize this direction due to the lack of current discussion on this perspective). The entire claim for "better privacy" is an illustration of this point: the companies want to continue making money, so they (sort of) oblige.

One could argue that at some initial point Facebook or Google was not really for profit. However, I would argue that this point is practically irrelevant and that the proto-F and G were conceptually unrecognizable next to the organizations of today. Additionally, these fledgling companies would never have left the ground if they did not at least have the potential for profit and, hence, investment.

Thus, in a nutshell, the user is an irrelevant leech with dedicated advertising.

Go social networking!

This post was partially inspired by Network.

A comment on "Zeno's Sound": Representation, nothing, and the shift to process

Hello folks,

I have decided to embody what I was describing previously about commenting via blog posts with links. The source of this current post, on which I am commenting, is a blog post titled "Zeno's Sound."

I am familiar with the author and, thus, this post is a continuation of an extended conversation we have been having. You, my fellow hypothetical readers, will now have the luxury of enjoying (or joining) it, given the shift to the current framework.

The issue I am having with the post has to do with the framework in which the author is operating. I would further like to contextualize by stating that I previously endorsed a variation of this framework, but had a recent turn due to the work of Mark Bickhard. Thus, this post will, simultaneously, be the first in a series of related topics to this recent turn.

I would explain the author's perspective as a particular rendition of the implications of such theorists as Alain Badiou, Gilles Deleuze, and other, similar continental philosophers. In fact, there is a related post that engages with Badiou's material directly. However, what is added in the discussion is the ties to music, sound, and art, more generally.

Before I continue, I should put a disclaimer:
I do not purport to know what the authors of any of these works are saying. I am not a student of continental philosophy, nor the respective authors. I am familiar with their works and have discussed many of their ideas with other students. But, that is the extent of my scholarly prowess. Thus, I am engaging with this material from a largely removed position as well as a different framework. The author of the work that I am commenting on, in this framework, is key to my ties to this literature base. However, part of my point in this comment is that I do not believe it is even possible to know what the authors are saying. I can hear people already cringing at this statement (another absolute relativist), but give me a moment to explain.

The framework that I am endorsing--which (potentially) remedies the initial spur for this comment--no longer accepts the proposition that symbols and/or information (including both these words as well as sound, etc.) encode and/or transmit anything. This theory, interactivism, denies encodingism of any form. Instead, one merely has their anticipations of future states as dictated by prior experiences with symbols, dialogue, etc. (this is an oversimplification but I am only going to peripherally engage with this idea for now...). The result, then, is that I can only comment on the previous experiences I have had with this material, largely through the author on whom I am commenting. Thus, if you have a rebuttal that runs very close to the text, you may be viewing an entirely different world from the 'same' set of symbols. I am always interested in such criticisms, but they may be missing my point entirely. I would kindly ask, given this, that one takes an initially agnostic position to the framework which I am endorsing: an external critique is inherently comparative and thus only peripherally relevant from an internal perspective.

Now, a particularly astute observer might notice that I am also making a comparative claim. You would be right. This is actually my point. I am introducing a new (i.e., vulnerable) form through a juxtaposition with the author's work via this comment. It requires some space in which to grow before it can clash with fully fledged bodies of knowledge with massive support bases.

To continue my comment...
The continental philosophers to whom the author is appealing are, in this new framework, geniuses of the encodingist world view. That is, they addressed many of the inherent problems and contradictions created by the endorsement of encodingism through such fascinating concepts as "nothing" or the "null set." And, it turns out, the author's conception of silence is closely related to this idea.

In sum, I would reduce (probably incorrectly) the author's points to the following:

(1st paragraph) Silence (or nothing) is nowhere or is not a thing.
(2nd paragraph) Silence is simultaneously everywhere and in everything.
(3rd paragraph) Representation ruins negative sound.
(4th paragraph) Representation ruins positive sound.
(5th paragraph) This problem is fundamental or it is not merely a matter of pragmatics.
(6th paragraph) Any discussion of sound has already lost the silence.

As my translation demonstrates, representation or encodingism is the issue and silence or nothing is only a minor, if particularly creative, palliative. What is needed is a different framework.

If one removes representation in place of anticipation one gets the following:
The digitization of sound and/or silence is merely a means to create anticipatory structures of what will occur in process when interaction occurs between the listener and the productive mechanism. It is a crystallized, symbolic foreshadowing in a highly complex anticipatory network. Thus, what Cage, Zeno, and the null set are pointing to is merely the limits of the current anticipations, limits which arguably no longer exist in systems which have integrated these paradoxes in a productively anticipatory fashion (i.e., which can utilize their predictions of these phenomena in their systems cohesively and usefully [i.e., to make further predictions]).

Interestingly, one can actually take the author's post to be the perfect embodiment qua illustration of this claim. That is, the author is illustrating how Cage, Zeno, and the null set are no longer limits since he can use their implications in a productive fashion as per the example of sound.
I can, however, anticipate that they would oppose the null set's inclusion in this list, as illustrated by the last two sentences of the 6th paragraph:

"... can there be a change of intensity of no sound? Unfortunately, or perhaps fortunately, there is not an answer to this within our range of hearing."

Silence is a symbol that is not a symbol. Again, this property is unnecessary if symbols don't contain anything. They have no content, so all symbols are the null set--an oddly poignant point given the parallels in mathematics. One simply anticipates future numbers and, thus, the symbolic system entirely.

Another technicality: The pseudo-semantic web and a new hyperlink form

Alright folks, I'm about to introduce a new approach much like the commenting thing.

The idea, here, is to begin better integrating our digital micro worlds with that which is outside. When I state an idea or topic that is either specific or expansive (in that it leads to a plethora of other topics and research), I'm going to create a link that brings you to what I was referring to. The goal is to create a mental web of sorts and support the development of the Semantic web.

Now, this may seem like a daunting task, linking everything to everything, but I'm going to start with simple Google searches of what I'm looking for while narrowing down the idea to what I mean. This brings about a very interesting issue in form.

As soon as I tried to create the first link above, I realized the complexity of such a simple activity... I mean so many things! What is the best way to demonstrate them? Or, more specifically, how the bloody hell does one pick a single page? The implications of that choice are so great. You are going to bolster the importance of that website and potentially set up a dialogue (remember, oh consumers). You are potentially going to forever shape an individual's first encounter with an idea. That is, it's a big responsibility.

To fix this, I first thought that I should only link to Google searches so that the individual might decide. The problem with this is that I'm not benefiting other productive members of the internet nor am I specifically linking in a direct way (i.e., it is too open). Next I thought that Wikipedia could hoard all my links, but that, too, suffered from a similar problem. Following that, it occurred to me that I might be all academic and just link to scholarly papers and original sources, but this was far too constricting.

A light bulb at this point flashed: perhaps the way we link is too old-school. I want a link that explodes me into a framework of ideas that are hierarchically organized (or prioritized) relative to their apparent proximity to the idea. A visual Google search a la graph theory or mental web, but where the author determines the priority of the listing.
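Until such a visual search exists, the data structure behind an 'exploding' link could be as simple as an author-weighted list of targets, ranked and truncated on expansion. The term, weights, and URLs below are all placeholders:

```python
# Toy version of the "exploding link": instead of one URL, a linked idea
# maps to an author-prioritized list of (weight, target) pairs. Expanding
# the link returns the top few targets in the author's chosen order.
# All targets are placeholder URLs.
links = {
    "dynamic systems": [
        (1.0, "https://example.org/attractor-basins"),
        (0.6, "https://example.org/chaos-overview"),
        (0.3, "https://example.org/graph-theory"),
    ],
}

def expand(term, top=2):
    """Return the author's top-priority targets for a linked idea."""
    ranked = sorted(links.get(term, []), reverse=True)
    return [url for _, url in ranked[:top]]

print(expand("dynamic systems"))
```

The point of the weights is exactly the hierarchy discussed above: the author, not an algorithm, decides which target a reader meets first.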

This idea is actually reminiscent of some work I had done previously where mental maps are illustrated in the framework of dynamic systems through the use of attractor basins.

Here's an image I stole from a (sort of) paper I wrote for an independent study.

Though I could riff on this topic forever (foreshadowing?), I will continue.

What I realized was that, until such a search is created, I am going to have to find another means. My temporary solution was to create reference numbers of links after each point (e.g., (1 2 3)) and organize them hierarchically. I may still implement this, but I initially saw a problem: by creating multiple links, I make the task of further research appear more daunting, and thus perturb people in such a way that they are potentially less likely to follow the links! Multiplicity is simultaneously dispersion.

In conclusion, for the most part, I have decided to create links to the best possible option (with possible occasional implementations of the best few options when I deem the time appropriate). On the basis of whatever wisdom I have combined with sheer impulsivity, I will choose just one.

May you forever enjoy the never ending web of relations that is the internet of the future now.

A method for posting comments

Hello hypothetical readers,

A (possibly) innovative way of posting comments to my blog presented itself in place of the usual forms. My idea is to urge you to create a blog of your own--a surprisingly easy task--by whatever means you see fit (though, I must say Google has done a wonderful job with this blog platform...). In this new private blog of your own you can create a comment to one of my posts and then link to it. I will then see your link (after you or someone else clicks it) in my stats and, assuming it is at all relevant or interesting, will likely respond with a comment and link in turn.

I like this idea for a few reasons. First, it reminds me of the old-school process of writing letters, journal articles, or books in such a way that they create a dialogue among peers, but in a contemporary context. Second, blogs of relevant dialogue will begin to create a mutually supporting network or interface of connections, readers, and topics. Third, everyone's blog is still independent and can be edited, adapted, etc., at their own discretion, which leads to... Fourth, there's a sort of evolutionary progression as the useless bloggers (and those that would issue spam, etc.) are filtered out from the network as well as from each other's individual blogs. I should also add... Fifth, down the road (or now), a bit of relevant advertising could help sustain our dialogue (and bellies).

Now, the wisest among you might say... "But Mike, this is already the role of blogs and the internet at large!" And I most heartily agree. However, the difference and distinction (despite, obviously, removing a comment box) is that we (or at least I) are (am) now consciously and deliberately operating in this framework. I comment so as to create links to the people I deem significant as well as the networks they support and others do the same for me.

Thus, I must add, O Bloggers, by commenting (i.e., linking) you are buying, my fellow consumers. Not necessarily a product you can set on your dresser, eat, or otherwise materially enjoy. No. You are buying future posts, peers, and networks of the same quality.

Buyer beware.

Digital warped worlds and the death of truth approximations

It occurred to me somewhat recently that we (that is, the royal 'we' or, at the very least, I) have been going about this whole thing (whatever it is) terribly wrong. That is, I must ask the question:

"Why in God's name are we trying to approximate 'reality' or the 'truth'?"

I think video games are, in this case, an excellent example. Though things have been changing thanks to the plenitude of indie game companies exploding in the mobile and smaller game scenes, the current plunge of mainstream gaming to ever better approximate reality is bloody absurd. I don't want to imitate what I could go outside and enjoy in a way that is inevitably better. True, the whole "infinite lives" thing is certainly advantageous, but that is, actually, my point exactly. It's not real!!

Thus, I would like to offer a possible idea...
Let's create a first person shooter a la Escher.

Here's a quick picture I drew up in paint (so that one might visualize the strangeness). I will then proceed to explain...

There are basically three ideas being illustrated here. First, and most obviously, the world is like PacMan. When you walk to the end of the hall, you end up back at the beginning. A particularly creative game might be able to capitalize on this idea by making bullets persist and, thus, loop around (i.e., a past bullet becomes a future liability). The game now truly capitalizes on the element of time in this advanced framework (i.e., you are playing against past and future selves).
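The wraparound hall is just modular arithmetic: positions live on a ring, so a persistent bullet that runs off one end re-enters the other and can eventually threaten its own shooter. A minimal sketch, with an arbitrary hall length:

```python
# Sketch of the PacMan-style wraparound hall. Positions live on a ring of
# length HALL, so stepping past either end re-enters from the other side.
# A persistent bullet therefore loops around: a past shot becomes a future
# liability. The hall length and velocities are arbitrary.
HALL = 100  # hall length in arbitrary units

def step(pos, velocity):
    """Advance a position one tick, wrapping around the hall."""
    return (pos + velocity) % HALL

bullet = 95
for _ in range(3):
    bullet = step(bullet, 4)  # 95 -> 99 -> 3 -> 7: looped past the end
print(bullet)  # 7
```

Negative velocities wrap the same way under Python's `%`, so the bullet can loop in either direction with no extra logic.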

The second detail is that there are two pathways in the middle of the hall that loop you to opposite ends (i.e., the top might loop to the left entrance and the bottom to the right). However, as I attempted to show with the colours, the loop results in a physical rotation of the playing field. For the characters that move through the entrance (or door as I named it), what was previously the wall is now the floor, and the world rotates accordingly. If the walls and ground all have cover, this adds an additional odd element.

Third, the ceiling of the game is conceptually sandwiched oddly as I attempted to show in the little pic on the bottom right. That is, even though the hall is perceived as straight to the player, if one were to look up they would see down onto a section of the hall that is either behind them or in front of them. Thus, the first section will see down onto the second section and vice versa while the third section will see down onto the fourth section and vice versa, and so on and so forth. This idea is especially interesting when combined with the 'doors,' as when someone steps through the door this property suddenly becomes a part of the walls. I can only imagine what that would look like.

The last thing I would like to say about this idea is that the doors, loops, and ceiling should not be portals. Rather, the graphics, ideally, should fuse into one smooth scene: the hall looks like it never ends (barring interruptions of cover and the repetition of the player(s)), the ceilings just look like the next section is stacked upside down above them, and the ground of the doors smoothly fuses with the wall at the front or end of the tunnel.

Now that is an FPS for the record books!

An interesting extension of this would be to create game maps with mathematical oddities like Möbius strips, Klein bottles, 4D or higher-dimensional cubes, etc. The math for these constructs already exists, hence implementing it in a game should be feasible. Jamming 3D graphics into their context would likely just result in odd things, but that is the point.
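For the Möbius-strip map, the math is indeed ready-made. The standard parametric embedding below shows the property a game would exploit: going once around the loop (u to u + 2π) flips the cross-band offset v, so floor and ceiling trade places much like the rotating 'doors' described earlier.

```python
# Standard parametric Möbius strip: a point is given by the angle u around
# the loop and the offset v across the band (R is the loop radius). One
# full trip around the loop maps (u, v) to the same point as (u + 2*pi, -v),
# i.e., the band comes back flipped.
import math

def mobius(u, v, R=2.0):
    x = (R + v * math.cos(u / 2)) * math.cos(u)
    y = (R + v * math.cos(u / 2)) * math.sin(u)
    z = v * math.sin(u / 2)
    return (x, y, z)

# One full loop brings v back with its sign flipped:
a = mobius(0.5, 0.3)
b = mobius(0.5 + 2 * math.pi, -0.3)
print(all(math.isclose(p, q, abs_tol=1e-9) for p, q in zip(a, b)))  # True
```

In a game, that sign flip is the whole trick: walk the corridor once around and your 'up' has silently become the other side of the band.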

If anyone wants any help trying to implement any of these ideas, send me an email. My coding and math skills are not fantastic, but I am always willing to learn and persevere.

Sunday, June 17, 2012

A little about me...

(though this will probably end up on my google+ profile at some point)

I am a recent graduate of the undergraduate Psychology program at York University, and I will be beginning my master's in Cognitive Science at Carleton in the fall. My research, in the past, has been an interesting combination of the work of Thomas Kuhn, Imre Lakatos, and similar theoreticians with the formal theory of dynamic systems (or chaos). My goal for some time has been to find a plausible way to approach the task of modelling or formalizing the mind that evades the problems of the contemporary structuralist--post-structuralist dialectic. That is, I have attempted to answer the following riddle:

How does one have a completely relative, post-structuralist framework without throwing structuralism "out with the bath water"? Or, what role does formalism play when it no longer allows access to the 'truth,' 'infallibility,' 'power,' or any other hierarchical maneuver?

The best answer I have so far is that it may offer productive anticipations whose viability is gathered purely on the basis of persistence. Less opaquely, formality builds (mental) structures in a way that is useful.

My interests, outside of my research, are somewhat diverse. They largely involve Psychology, Philosophy, formal theory of some kind (e.g., mathematics), or technology/computers. You could also run into some music related things, fiction, and art on this blog.



Alright folks... This is the beginning of the end as they say.

As an initial post, I just want to say hello to the blogosphere. And I would like to say that, in the short term, I am going to hold off on enabling comments or ads. If times should change and my readers reach a few hundred, I will opt for a re-negotiation of these terms. In the meanwhile, it will be a bit of a solipsistic narrative. The hope, again, being that a tidbit here or there might inspire new thoughts and actualities. I will post an email at some point, should someone wish to pursue one of these potential projects with me.