Take a picture of this pattern with a smartphone and obtain an optically pixelated representation of the image which may at some later date provide a reminder of the color and glyph combinations as well as subjective data to be distributed via social networking tools, email transference, SMS messaging or speech-based descriptions. This technology might be applied in a verbal interaction between two linguo-compatible bipedal voice-enabled anthropoids, as in this sample exchange: “Homes, did you see the new House Industries Alphabet QR print?” “Why yes. Dude, that was bad-assed!” Such a versatile dialogue could take place in line-of-sight proximity or be transmitted in digital or conventional copper-borne analog format as long as the sound waves were transliterated into decodable aural streams at both termination points.

With the help of interaction and interpretation, this image can be distilled and agglomerated through text- or burst-based social labyrinths such as Twitter, where humanoid cerebral processing transmutates the pictograph into 140 human-readable graphemes that may become obfuscated with desultory comments and/or first-world grievances. For example, “#beauty dig new @houseindustries prints omg @pinkberry put gelatin emulsifier in my glutenfree #wheatgrass smoothie sundae #epicveggiefail” can be interpreted as a valid string if truncated to the first 44 character positions.

However, personal information aggregators have been found to inconsistently decode aesthetic protocols contained in such illustrative assemblies. For example, Facebook users may interpret the ocular input of the alphanumerosymbolic code contained in this pattern and apply a “Like” criterion while others may not apply any criteria at all, creating an inconclusive evaluation of the image and therefore an incomplete appraisal of the visual array.
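For the computationally inclined anthropoid, the truncation procedure described above can be rendered as a minimal Python sketch. The sample tweet and the 44-position cutoff are taken directly from the text; the variable names and the trailing-whitespace trim are illustrative assumptions, not part of any official Twitter protocol.

```python
# The sample burst-based utterance from the text above.
tweet = ("#beauty dig new @houseindustries prints omg @pinkberry put gelatin "
         "emulsifier in my glutenfree #wheatgrass smoothie sundae #epicveggiefail")

# Twitter's historical ceiling: 140 human-readable graphemes.
assert len(tweet) <= 140

# Truncate to the first 44 character positions; rstrip() trims the
# trailing space so only the laudatory prefix survives (an assumption
# about what "valid string" means here).
valid_string = tweet[:44].rstrip()
print(valid_string)  # → #beauty dig new @houseindustries prints omg
```

Everything after position 44, gelatin-emulsifier grievance included, is thereby discarded from the appraisal.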