Category: Essay

  • Y Bother: The X-Risk of Artistic Creation

    “Humanity is acquiring all the right technology for all the wrong reasons.”

    – R. Buckminster Fuller

    *Disclaimer: when I use the term AI, I’m not referring to just Strong AI or AGI, but the other “pre-AI” automation or ML models that exist today as well. The “concept of AI” if you will. I’m not an AI researcher or even a competent enthusiast, so don’t take my knowledge of the technology as comprehensive.*

    While riding the waves of the future’s existential crises, there’s more than enough to be worried about. There’s likely just as much to be optimistic about, though my brain (despite various training attempts) tends to look for the problems, so that I may identify–or at least be aware of–potential solutions. I spend way too much time online cutting through the brush with my query machete, attempting to find the “true” answers to my debacles. Unfortunately for me, I always come up short due to the whole “you can’t predict the future” thing. I don’t want to predict the future; I would just like more general assurance about some larger-scale concepts.

    In the field of artificial intelligence, X-risk (short for existential risk) is defined as a risk that poses astronomically large negative consequences for humanity, such as human extinction or permanent global totalitarianism. There’s a ton of writing on this subject, and I don’t want to go into it here: it’s deeply researched, many people are working on it, and every company trying to develop artificial general intelligence has departments devoted to safety and risk–arguably the most important part of AI research. I want to talk about how AI is and will continue to affect us creatively: the more specific “micro” creative risks of the future of AI, where the line should be on development, and the why behind some of the work in the field towards an artistic AI.

    As a layman, it’s been my understanding that the purpose of AI is to alleviate the more tedious, time-consuming drudgery that humans either aren’t good at or dislike doing in their jobs. As someone who spends time on the tech-focused parts of the internet, I’ve seen attempts to create ML models that take over not only the monotonous but the enjoyable as well. A model is an algorithm that automatically improves through experience and the use of data. You train a model on what’s known as training data so that it can make decisions and predictions without being explicitly programmed to do so. While I understand we haven’t achieved “true AI”, models like DALL-E, a version of GPT-3 (one of the most cutting-edge language models), present some problems that should be, and are being, discussed in the community.
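
    To make the “train a model on data” idea a bit more concrete, here’s a minimal sketch using scikit-learn (my choice of library, not something the essay mentions), with made-up numbers. The model is never given a rule; it infers one from labeled examples:

    ```python
    # A toy example of training a model on labeled data, then predicting on new inputs.
    from sklearn.linear_model import LogisticRegression

    # Training data: hours of practice -> whether a (hypothetical) drawing passed review.
    X_train = [[1], [2], [3], [8], [9], [10]]   # features
    y_train = [0, 0, 0, 1, 1, 1]                # labels the model learns from

    model = LogisticRegression()
    model.fit(X_train, y_train)                 # "improves through experience and data"

    # The model now predicts for inputs it has never seen, without explicit rules.
    print(model.predict([[2.5], [7.5]]))        # -> [0 1]
    ```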

    DALL-E generating images from a prompt – OpenAI

    Just because we can doesn’t mean we should

    The technology industry at large has a habit of trying to solve everything using technology, and it tends to focus on the wrong problems. Tech is trying to “solve” aging when it should be focusing on solving premature, preventable death. Many other things the industry sets out to solve would be better tackled by reworking existing systems, or by simple communication. We don’t need a new productivity tool or a social media app; we need understanding, civility, and community. Technology isn’t the problem; greed is. The problems we face as a society are complex and systemic, and technology should be used as a seasoning rather than the main ingredient in many of our recipes of progress. Even if we can create true AI, does that mean we should?

    In the design and product industry, good products are built and designed by asking “Why?” If the product has no purpose, or the design doesn’t solve the problems of the intended audience, then why are we building it? We should ask the same of AI. What is the goal of AI? What are we striving for? We have enormous problems to face as a global society, politically, environmentally, and beyond. AI may be able to help with some of these issues, but not everyone is working on solving them. Many showcases of AI online are in a creative capacity: an AI that writes a story, paints a picture, or creates a song. At a lower level, it’s a novelty and fun for all, and may be useful to commercial artists; at a higher level, it becomes a meaning crisis.

    We can’t talk about the micro creative X-risks of AI without talking about the macro as well. As the technology quickly gets better, how long before it can write a coherent novel, paint a true work of art, or write the next hit song? Where is the line when working on this type of technology? Most of the reasoning behind this work is to help humans with their creative work, not take it over: a partner rather than a competitor. That would be nice if that’s where it ended. A technology that helps with writer’s block, great. A technology that helps brainstorm ideas, great. But if the technology exists, what’s to stop people from going further and having AI do it all on its own? True, it will raise the bar for creatives. If it’s easier for anyone to create a bad romance novel, then the true romance novelist will have to work harder to make better work. That’s a good thing, but will many consumers care?

    Many consumers of art, whether it’s visual, written, or auditory, want the human relationship that comes from being the audience. That’s where much of the enjoyment of good art comes from: knowing that someone else in the world created it and is connecting with you through their art, knowing it’s from a fellow ape. If mindless consumption is the goal, then none of that matters, right? Let the floodgates of crap open. If 99% of things are “bad” anyway, what’s the difference if it becomes 99.8%?

    Portraits from Artbreeder

    The future of art – process over product?

    I believe there will probably be three categories of art in the future: human-made, human + machine-made, and machine-made. What does machine-made art look like? From the generative artists of today, it’s surreal and abstract, while models like Artbreeder and DALL-E create more concrete examples. Maybe in the future there will be an Etsy just for human-made products, if non-artists get the idea of taking the easy road to make money with little effort. I think without question there will be AI-augmented work; even something as simple as Grammarly can count as an “augmented experience.” But I’m of the hope that the industries that value the art they create will focus on human-made and augmented work rather than just machine-made. Machines may make “art,” but they cannot make your art (on their own). There will still be a reason to create, to get your music, thoughts, and visuals into the world.

    Even though computers are better at Go or chess, people still play. When it comes to art, the goal isn’t to be the best at something; it’s the act of creation and connection. So does it matter if someone else can paint “better” than you? Will you stop painting because you aren’t the “best”? Maybe AI will just be another factor in that decision: if someone or something is better than you at doing something, why do it? But your art must come from you and your process; art is not just the product you create. If you can push a button and a painting appears, even if it’s technically good, who cares? It’s still not yours. AI is understood to be good at delivering a correct result rather than an interesting process. As humans, how and why we get from A to B is just as important as what B actually is. We value the process over the product, especially in art, and that’s what the AI art of today lacks.

    Famous scene from I, Robot (2004)

    Hold the line

    Keep in mind that we don’t need to achieve true AI or AGI to induce these fears. Even today’s models like GPT-3 and Google’s T5, or whatever GPT-4 or 5 might be capable of, are causes for concern in many situations. One such example is a generative AI art model created by an artist called Rivers Have Wings, which can attempt to mimic the style of another artist. Other examples include a Twitter account where users simply name a concept and the AI creates a digital work based on the suggestion. But when you view completely AI-generated art, it feels empty. To me, it feels uneasy and a bit disturbing, like you’re not supposed to be looking at it. Maybe that’s the inherent attraction some find in these pieces, that there’s something humans aren’t meant to understand. Maybe I’m reading too much into it and it’s simply a novelty. AI artist Mario Klingemann, also known as Quasimondo, believes this is nothing more than “a storm in a teacup.”

    “They create instant gratification even if you have no deeper knowledge of how they work and how to control them, they currently attract charlatans and attention seekers who ride on that novelty wave,” Klingemann says.

    So what’s the right amount of concern to have these days? If you talk to Joe Schmoe off the street, he probably holds one of two positions: either he isn’t worried at all because he has no pulse on the technology, or he thinks AI will take over the world and make us redundant. If you go online to the Twitter or Reddit communities related to AI, you’ll find the same. Many think “it’s overhyped, interesting stuff, but don’t worry for a long time,” while others think “the singularity is near and we’re on our way out, might as well just give up now.” Some think we will see large disruptions from true AI in our lifetimes; others think it’s a pipe dream and we may never get past the Turing test.

    For example, this Reddit answer to the question: Can art that has been created by Artificial Intelligence be considered as “real art” or not?

    “I’d tend to lean towards no. Art could broadly be defined as an expression of the human condition/experience rendered through a specific medium. So I’d say the output of an AI wouldn’t be defined as art. Though defining art definitely is almost impossible so it’s hard to come to a conclusion.”

    And another comment about the future of creativity and art which gives a more bleak and depressing scenario:

    “I would argue that the end of art is to arrive in our lifetime. 

    First for specialized narrow AI segments, and eventually all creativity, as nobody will be able to outperform the synthetic creative machines that will draw from more inputs and more feedback knowledge and data than a human brain ever could.

    Music, drawing, painting, and creative writing will be outperformed by ML and the consumer will be better understood, predicted, and manipulated than by any human once could.

    At some point, humanity will encounter the great stall, where you no longer feel like stepping into the arena and competing. When the machines will out-draw, out-create, and out-sell your wares, you will have to step out of the arena and submit to the loss of your creative self never being able to come close.

    There will be a time when the natural artist vs. the hybrid artist will co-exist, but that era will come to close too.”

    Sounds great, sign me up.

    An article written by Cem Dilmegani, the founder of an AI-centric data company, condenses the results of major surveys of AI researchers. When asked, “Will the singularity ever happen?” most AI experts surveyed say yes. When asked, “When will the singularity happen?” most answered, “Before the end of the century.” This is concerning, even though experts have predicted checkpoints in the development of AI before that still haven’t been realized. Again, it seems nobody can predict the future, yet.

    I’ve seen many online describe a future “utopia” where we’ve achieved AGI, solved all our problems, and humans are free to do whatever they want all day while we reap the benefits of a UBI. Entertainment is created on the fly and customized for our states of mind. The problem is that this is highly unlikely, it skips over us solving global problems such as climate change, and with nothing but free time and no desire to progress, we would truly be redundant. And if AI can do anything we can do better, what’s the point of doing anything at all? Inherent meaning, for the sake of it? Is that good enough to keep all of humanity going?

    Tell it to me straight

    It’s very difficult to get a straight answer on this, since what I assume would be the best way to find the correct level of concern is to call a safety researcher at OpenAI or DeepMind and ask. I suspect even amongst that group there are different levels of concern. Many are probably techno-optimists excited for the future who have happily drunk the AI Kool-Aid.

    AI research suffers from being abstract and technical, making it difficult for those outside the field to really know what’s going on or how to feel about it. I don’t want to sound like an anti-tech Luddite, because I do believe the technology being worked on can add a lot of value to our world, and of course the cat’s out of the bag with AI already, but I want to prevent it from crossing the line of making parts of our lives too automatic and making us unnecessary. I work in the technology industry, but I’m a big believer in balance, and I wish we were working towards technology where it’s needed, not where it’s negatively disruptive. It’s tiring to fight ourselves for the greater good against the problems, situations, and technology we’ve created.

    By combining my hopes with reality, I’m still wanting to find the “truth”, but it feels like we won’t know until we know, you know? Life is short, big changes take time, so maybe I’ll find peace in focusing on what’s important and crossing bridges when necessary. Wouldn’t that be nice? It’ll be interesting to see how the next few years play out with the development and application of this technology, though I still believe we should keep some things sacred. If I’m totally off base on these thoughts let me know, I’ll actually be relieved if there’s “not as much to worry about” and I can just move on to my next existential crisis sans AI.

  • Division by Zero

    “A candle loses nothing by lighting another candle”

    James Keller

    For whatever reason, many times in my life I’ve had a bad case of “Keeping up with the Joneses.” Call it growing up in white suburbia, being a neurotic millennial, or living in the digital age of vanity metrics, but there’s always been someone to compare myself to who’s got it all figured out. On a logical level, I understand that I can’t compare myself to anyone but me, who I was yesterday in contrast with who I want to become tomorrow. However, on an emotional level, it’s a bit of a different story.

    Tell me if the following sounds familiar: You’ve finished work for the day and have a to-do list of your own planned for some personal projects you’d like to make regular progress on. Maybe you want to become better at drawing, so you’ve decided to crack open an Andrew Loomis book and start having fun. Before doing so, you decide to hop on Instagram or Twitter to relieve your FOMO, and start scrolling.

    Before long, you see that someone has posted a photo of their new painting and is getting rave reviews in the comments. It looks phenomenal, the brushstrokes, lighting, and values are all just right. You start to feel nervous, imagining what it would take for you to get to that level. You’re not ready to post your artwork online, you feel like you’ll never be that good! You feel behind in the drawing world because in your view, someone in your greater network making progress on their own somehow means you’ve made less progress.

    If the above situation sounds slightly silly, it’s supposed to. It doesn’t seem silly when we’re in the middle of such an experience, but when we talk about it to ourselves or others later, it sounds almost ridiculous. This is the danger of unnecessary comparison and zero-sum thinking (plus some perfectionism peppered in).

    Most things in life do not have a health bar hovering above them, depleting each time someone does something that you don’t. Just because you publish a new edition of your newsletter, that doesn’t mean I’m one blog post short in the eyes of the public. A parent can love each of their children the same amount; loving more than one child doesn’t mean each child receives less love.

    Regardless of privilege, someone is always going to have things “better” than you and someone is always going to have things “worse” than you. It’s all about perspective and what you value, of course, but this old adage does little to ease my mind about comparing myself to others based solely on outward situations. I can’t help but wonder if there’s a better way to think about this.

    “Look to the cookie”

    Zero-sum thinking is typically created by “black and white thinking”, a defense mechanism also known as “all or nothing” thinking or “splitting”, where we tend to think in extremes without a middle ground. Imagine a student, who feels that their only two options are either to get good grades or drop out of school. They fail to see the myriad of other options between the two, namely the “gray area”; we do this to ourselves in the places of our lives where it’s least useful.

    In game and economic theory, a zero-sum game is a situation in which each participant’s gain or loss of utility is exactly balanced by the losses or gains of utility of the other participants. Along with the economic situation, this concept goes further to thought. We experience “zero-sum thinking” where we perceive situations as zero-sum games, feeling like someone’s gain is another’s loss. In other words, you gain, I lose, and the net change is zero.

    In a zero-sum game, a player never benefits from communicating her strategy to her opponent. This isn’t how most of life works. For example, giving everyone equal rights doesn’t mean that each person has fewer rights. In this respect, life is not a seesaw where if one person goes up the other goes down, both can be at the same level off the ground.

    Naturally, there are situations that genuinely are zero-sum, such as sports or sharing a dessert. In sports, someone has to win and someone has to lose, and as we’re all aware, for your partner to have a slice of cake means one less slice for you to enjoy. By contrast, buying or selling something is not an example, since both parties gain from the situation. One side makes money while the other gets whatever they paid for.
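
    To put some toy numbers on the idea (my own hypothetical figures, not anything from the research above): in a zero-sum situation the participants’ payoffs cancel out exactly, while in a simple trade both sides can come out ahead.

    ```python
    # Sharing a dessert: every slice I take is a slice you don't get.
    cake_game = [(+1, -1), (+2, -2)]  # (my_gain, your_gain) for a couple of outcomes
    print(all(mine + yours == 0 for mine, yours in cake_game))  # True -> zero-sum

    # Buying a used bike for $100 that's worth $120 to me and cost the seller $80:
    buyer_surplus = 120 - 100   # I value it more than I paid
    seller_profit = 100 - 80    # the seller got more than it cost them
    print(buyer_surplus + seller_profit)  # 40 -> both sides gain, so not zero-sum
    ```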

    There’s a fallacy in economics called the lump of labor fallacy, in which the general idea is that there is a fixed amount of wealth in the world or a fixed amount of work to be done in an economy. I feel like this is related to zero-sum because it follows the same pattern of thought that something is limited when in reality it isn’t. There probably isn’t an infinite amount of work to be done in an economy, but there definitely isn’t too little.

    We need to go deeper

    Why do we have a tendency to think this way? It appears there are both immediate and evolutionary causes for us to fall into the trap, and they aren’t always apparent to us, so we need to dig a little deeper.

    For an immediate cause, we can point to a person’s individual development, the experience they have with resource allocation, and their worldview. We can speculate that if someone grew up having to protect their food from ravenous siblings so there would be enough for them to eat, maybe they’re more likely to view situations as zero-sum when they aren’t. People growing up on either end of the socioeconomic spectrum may be more likely to go down this line of thinking, that when one side gains something, the other has to lose it. Setting aside the real situations in which the rich do take from the poor, this is just an example to illustrate a possible susceptibility to thinking in a zero-sum manner.

    From an evolutionary perspective, we can look to our Neolithic ancestors, for whom there was fierce competition over scarce resources. The development of technology was slow, unlike today, and there was no incentive for humans to understand economic growth. That became the default way we think, which then has to be unlearned. We feel entitled to a certain share of a resource, and today this turns into unnecessary competition between us over things we don’t need to compete for.

    I also don’t want to blame social media, but truthfully, I believe it does play a large role when it comes to our tendency for zero-sum thinking. If you remove that negative stimulus from your life, it becomes more difficult to fall into the negative thought patterns it helps cause. On the other hand, social media can be amazingly helpful for finding like-minded individuals, learning, and inspiration.

    At the end of the day, like most things, responsible social media use is a balancing act. Some of us are naturally “always online” and have no problem spending much of our free time on social media, some even make their living from social media. For those of us with a tendency to slip into some bad habits when we spend too much time on social media, maybe it’s worth looking into a more balanced social media diet.

    It’s all relative

    Going down the path of zero-sum thinking can be a relativity trap. We’re frequently looking to others to gauge their progress in a game that’s unwinnable, doing a lot of work trying to pinpoint someone else’s journey when we should focus on our own. We tend to anchor ourselves on the wrong things, like money, Instagram likes, or follower counts. These things aren’t inherently bad to think about, but when we seethe with envy over another’s circumstances, we only hurt ourselves in the process.

    How can we avoid this biased way of thinking? One answer is simply to focus on the value of the things we do have rather than the things we don’t. We can avoid anchoring ourselves on one small part of a much larger picture. If we help others more often, rather than seeing them as competition that needs to be beaten, we can create feelings of reciprocity, which leads to much more fruitful and pleasurable outcomes. Trading the short-term gains of true zero-sum situations for long-term, mutually beneficial ones is also a helpful reframing.

    Personally, I‘m certainly not all aboard the train that is free from zero-sum thinking, especially when it comes to my work or other interests, but if I’ve learned anything from this last year, it’s that we can all do with a bit more self-acceptance and support towards one another. If I can stop seeing situations as zero-sum so often, I will most likely be less envious and happier with my own lot.

    Someone else being good at something doesn’t mean I’m unable to reach that level if I too put in the hard work, so I should be inspired rather than defeated. That’s the mindset I’m hoping to cultivate going forward, and I imagine we all could do with a little more appreciation and a little less unnecessary competition.


  • The Social Blue Light Filter

    “When one tugs at a single thing in nature, he finds it attached to the rest of the world”.

    John Muir

    “This was supposed to be the summer of George”

    A saddened George Costanza – Seinfeld

    On New Year’s Eve 2019, millions around the world were making resolutions of bettering themselves and enriching their lives. Finally taking the time to practice piano, adopt a vegetarian diet, and meditate every day. Personally, I wanted to take more opportunities to travel this year. I boarded a plane just once in 2019, from Phoenix to Burbank to see close friends graduate from college. Shortly into 2020, something happened that took us from bright-eyed and bushy-tailed to pessimistic, isolated, and stressed-out.

    When COVID-19 took center stage in March, I have to admit that, as an introvert, I couldn’t help being slightly relieved that I had been given a legitimate excuse to avoid all social gatherings for the foreseeable future. Instead of trying to come up with lousy excuses or feign illness, I simply didn’t have to worry about going into the office, making social plans, or dating.

    Having been a practiced pseudo-hermit, I thought that staying in quarantine from the world outside of groceries and the occasional stroll would be a cinch. Now, going on six months of the “Ronaissance” or the “Zoom Gloom,” the lifestyle that the pandemic has forced is taking a toll on my perception of the world in many aspects, one being social. Even with all the technology Silicon Valley has to offer at our disposal, can experiencing anything through a screen replace the benefits of real-world social interactions?

    Like it, Love it, Gotta Have it

    “Man is by nature a social animal; an individual who is unsocial naturally and not accidentally is either beneath our notice or more than human. Society is something that precedes the individual. Anyone who either cannot lead the common life or is so self-sufficient as not to need to, and therefore does not partake of society, is either a beast or a god. ”

    Aristotle, Politics

    It’s well understood that humans are social creatures. Even if you’re a self-proclaimed misanthrope, there’s a biological component to being human that necessitates some form of social connection. Maslow’s Hierarchy of Needs places psychological needs just above being fed, warm, and safe.

    Maslow’s Hierarchy of Needs

    According to Maslow, humans possess an affective need for a sense of belonging and acceptance among social groups, regardless of whether these groups are large or small. Feeling love or having a sense of belonging is foundational to feeling accomplished and truly fulfilled. We know that there’s more to “success” than money, and only we can define what that means for ourselves.

    As a society, we’ve been staying connected more than ever before in the last 20 years or so, mainly through social media like Facebook and Twitter. We’ve also stayed connected through the help of things like online video games, where conversations can range from catching up with your friends to yelling profanities at your virtual enemies.

    The main way we are “staying connected” during COVID appears to be through a variety of video calls. We use Zoom for work, FaceTime or Google Hangouts for friends, and maybe throw an app like Houseparty in there for good measure. We have a lot of chat-based apps like Discord or Slack that can simulate real-time talking with people all over the world over text or video, but are these options good enough for us to satisfy our needs for social connection?

    One is the loneliest number

    You may have heard that in our increasingly remote age, there is another epidemic taking place globally: loneliness. It’s really interesting that even though we’re more connected than ever in numerous ways, we’ve also never been more lonely.

    According to Dr. John Cacioppo, a Professor of Neuroscience and director of the Center for Cognitive and Social Neuroscience at the University of Chicago, the physical effects of loneliness and social isolation are as real as any other physical detriment to the body, such as thirst, hunger, or pain. He says, “For a social species, to be on the edge of the social perimeter is to be in a dangerous position.” It’s also specified that to satisfy our need for social connection, we need to be around people we actually like or care about. Unfortunately, taking cover behind shopping aisles to narrowly avoid your talkative neighbor at the grocery store doesn’t really fit the bill.

    In part of a 2007 research report from Wellesley College, it’s described that scientists in the fields of psychology and psychiatry “…have now really determined, without a doubt, that our brains are hardwired to connect: that we have mirror neurons that fire in response to the firing of another person’s neurons; that we actually have parts of the brain that atrophy in isolation.” 

    We can also look at human connection at the time when it matters most: when we’re growing up. Our familial relationships and friendships, among other factors, are crucial to healthy prefrontal cortex development, and they help set us up well for our future, both mentally and physically.

    There’s not an app for that, yet


    Social media is a double-edged sword. In smaller doses it can be highly valuable, but social platforms are designed to keep you engaged and get you addicted. Technology has helped us become more social online, but when you throw something like a global pandemic into the mix, we can see that it’s only come so far. I don’t believe the fulfillment of our cognitive need for social connection will come from a download in the app store, but we can’t let our thinking be constrained by that.

    I’m not going to throw the technological baby out with the bathwater and say technology can’t help solve this problem. Rather, it can help put us on the path to a solution. I can imagine a future where I can have coffee with friends of mine in my kitchen without them actually being there. I’m thinking that the holograms of Star Wars aren’t that crazy of an idea after all.

    Ask yourself, “How can we be with other people without being with other people?” and you’ll get a lot of answers involving looking at a screen or texting. In the future, and not necessarily a distant one, we might be able to have dinner with friends in a different state or country without looking at them on an iPad. We’ve seen demonstrations of projections occupying physical space before, so it’s not entirely out of the realm of possibility that technology will be leveraged for this type of problem.

    I think that dealing with problems as large and extensive as global loneliness requires more than one solution, and we can’t simply “How might we…” ourselves out of it. Design thinking and human-centered design can and will definitely help move the needle on solving humanity’s big issues, but we shouldn’t fool ourselves into thinking one group of people will have the metaphorical light bulb over their heads and save us all.

    Where to go from here

    The COVID-19 pandemic has exposed numerous issues in our society, from education and healthcare to the economy and social life. My hope is that instead of just racing for a vaccine and trying to move on with our lives, we also take the time as a society to learn from this experience. COVID-19 didn’t directly cause some of these problems; it simply revealed them.

    Staying connected is a problem that has plagued us as a society since social media really took off. If I can get a hit of dopamine from a like on my tweet, why do I need to waste my time at a bar with friends? There are psychological and physiological reasons why spending time with people in person is important, and maybe as research continues we might find other ways of socializing with similar benefits.

    I don’t believe that, as of this time, staying connected virtually can take the place of in-person social gatherings. However, for public health reasons, it’s what we’re obligated to do for now. It helps, mentally, to know that there will be a time in the hopefully not-too-distant future when we are able to spend time with people in person again.

    As someone who doesn’t consider himself particularly social, I’ve acknowledged that I need some real face time. I took a lot of my in-person experiences for granted before the pandemic, such as going to the office, going out to eat, or to the gym. We’ve been given an opportunity for all of us to really experience a problem in our society, and we owe it to ourselves to give it a proper examination for potential improvements to the way we live.

  • Why Context is Important to Learning

    Aristotle famously said, “We are what we repeatedly do. Excellence, then, is not an act, but a habit.”

    When I have spare time, I enjoy reading books from a wide range of topics, and every so often there’s a “meta” book I read about the act of learning itself or personal development in general.

    Recently, I finished reading a book called Ultralearning: Master Hard Skills, Outsmart the Competition, and Accelerate Your Career by Scott H. Young. The main thesis of the book is that anyone is capable of learning pretty much anything, and it gives a set of guidelines for learning a ton of information in a short amount of time.

    The book goes further than just learning conventionally. The stories and methods it discusses are specific to self-directed learning, also known as autodidacticism. Young gives examples of autodidacts such as Eric Barone, who taught himself how to draw, make music, write code, and more to create his dream video game, Stardew Valley. The farming simulation RPG was, and still is, very popular across multiple consoles, and it had sold over 10 million copies as of January 2020.

    Young writes about famous geniuses, dissects their learning styles, and shapes the narrative that yes, there are certain people who are naturally amazing at certain things and learn certain subjects quite easily, but for the most part, many of the people whom we consider “geniuses” are just very dedicated and passionate about what they learn.

    Cognitively Situated

    One of the main topics of Ultralearning is the concept of directness. Young defines directness as simply applying the skills you learn directly to the way you want to use them. For example, if you want to learn how to write code, simply watching videos and going through tutorials will only help to a point. If your goal in learning to write code is to make a website, the best way to learn is by, you guessed it, making websites.

    The concept of directness may sound like common sense. Sure, if I want to get good at running, I simply need to run more. Practice makes perfect, 10,000 hours, and all that. But I think directness in this context goes further than “practice makes perfect.”

    Directness as a concept is seen in other subjects as well, such as neuroscience and philosophy.

    Sketch of the embodied cognition perspective

    In neuroscience, there is the theory of situated cognition. As explained in the International Encyclopedia of the Social & Behavioral Sciences, situated cognition is defined as:

    “… a range of theoretical positions that are united by the assumption that cognition is inherently tied to the social and cultural contexts in which it occurs.”

    We find situated cognition validated in the real world through human interaction with activity, context, and culture. After all, as we grow up we are shaped by a combination of nature and nurture, an echo of the saying, “you are what you eat.”


    Texts about this theory go on to explain that the things we know or get good at are tied to how we apply them in our immediate physical world. Learning anything, from a new language to a physical skill, is not a silo out in its own world; it’s actually very contextual.

    When we learn something, having a context in which to apply our knowledge is one of the most important considerations. Thinking about why you want to learn something and the end goal you are trying to reach will not only make learning more meaningful, but will also let you make sure along the way that what you are learning is actually worthwhile for achieving that goal.

    The nature of learning requires newly acquired knowledge to actually be used for what we learn to really stick. Humans are social beings, and the way in which we give meaning to what we know is by having experiences and engagements with the world.

    This is also why learning communities are so successful. Looking at online courses, for example, many of them have associated Slack channels or Discord servers filled with both current and former students to converse with. Even going back a bit further, to plain old forums, people having meaningful conversations about a subject can truly get more out of it. I guess participation points aren’t for nothing!

    Let’s be Pragmatic about it

    There is also the philosophical tradition known as pragmatism. Charles Sanders Peirce, one of the “classical pragmatists”, first defined and defended the view in the United States around 1870.

    “The core of pragmatism as Peirce originally conceived it was the Pragmatic Maxim, a rule for clarifying the meaning of hypotheses by tracing their ‘practical consequences’ – their implications for experience in specific situations.”

    Pragmatism says that meaning is found when something we know in theory is successful in practice. Pragmatists prioritize understanding things in terms of concrete tasks and activities rather than in terms of abstract theory. They go so far as to say that words don’t have inherent meanings attached to them from birth; instead, they gain their meanings through repeated use. At a basic level, I like to think of pragmatism as “the philosophy of action.”

    We can relate this definition to learning about something as simple as an apple, when we’re babies or toddlers. The word “apple” doesn’t mean anything to a baby, but when the baby sees or eats an apple and associates the word with the fruit, the baby assigns meaning to it and recognizes what an “apple” is in a real context.

    Peirce’s cycle of pragmatism

    You can read and watch all you want, but when you get right down to it, nothing will make you believe in something more than seeing it work in the real world. I’ve read a lot of self-help books over the years, and from all the things each book will have you take away, the things I remember most are the concepts I’ve tried and seen work in actual situations.

    It’s kind of like your friend assuring you that “this too shall pass” when something negative happens in your life. It’s easy to give advice when you’re not in a situation or haven’t experienced it yourself. Even with something as simple as giving advice, we trust the words of someone who went through what we are currently going through leagues more than the advice of a well-meaning friend.

    After everything is said and done, we might see that our well-meaning friend was correct, just as was the person who had gone through the experience themself. We still would assign more value to the advice of an experienced person over someone else.

    In Closing

    If you’re interested in the concept of learning, I would highly recommend reading Ultralearning, as it tells great stories, gives concrete examples, and is all-around an interesting read. When you hear that someone has studied the entire MIT undergraduate computer science curriculum in months rather than years, as Young did, it sounds ridiculous. When you hear it right from someone who did it, it sounds possible.

    Even though Young learned all that information in a short amount of time, what he still remembers years later is a much smaller amount of the content he consumed. Nobody would be expected to remember an entire degree’s worth of information forever, but not surprisingly, the things he does remember are the concepts and skills he used regularly afterwards.

    Applying your learnings to the real world isn’t just better for memory, it’s also more rewarding. The satisfaction of finishing a painting in real life is much more rewarding than finishing watching a YouTube series on painting.

    I want to make sure that I don’t come across as someone who is saying that passively learning things isn’t valuable, because it is. The proof is in the pudding, though, as I’m sure you’ve experienced yourself. The things we do regularly or even practice every so often, sit much more comfortably in our minds than the things we learn once, and store away in the attic to forget about.

  • Parallels of the Scientific Method and Design-Thinking

    When I was in middle school, I first learned about the scientific method. I was told there exists a process that scientists follow to make discoveries, and was amazed that I could follow the same process that impactful scientists follow in my own classroom.

    We spent many lessons conducting experiments and following this process, asking questions, and testing our theories. I was never great at science in an academic sense; nonetheless, I always found it interesting and recognized its importance.

    From Design to Science and back again


    At its core, the scientific method is a problem-solving framework. The steps are as follows:

    1. Make an observation
    2. Ask a question
    3. Form a hypothesis, or testable explanation
    4. Make a prediction based on the hypothesis
    5. Test the prediction
    6. Iterate

    If you’re a design practitioner or in the tech industry, this process may sound familiar to you, and not just because it’s the scientific method that many of us learn in school.

    Design-thinking has become a buzzphrase, but has existed in some form for as long as people have been developing products, though it’s been popularized in the last 15 years or so.


    The design-thinking framework is as follows:

    1. Frame a Question
    2. Gather Inspiration
    3. Generate Ideas
    4. Make Ideas Tangible
    5. Test to Learn
    6. Share the Story

    Lather, rinse, iterate

    The main similarity between the two lies in the larger idea: testing and iteration. One of the most important aspects of both science and design is to test your ideas in real-world scenarios and iterate as necessary.

    Iteration in this context is not repetition; we’re not embodying the old saying about “…doing the same thing over and over again, but expecting different results.” Iteration differs from repetition because when we iterate, we slightly alter what we are testing rather than testing the same version over and over again.

    Within a single iteration we may test the same version multiple times, but if we never change what we test between rounds, we’re not getting the best bang for our buck from each round of testing.

    No idea is going to be perfect the first time around, so it’s up to us as developers of ideas and problem solvers to continually iterate and tweak our solution until it’s the best it can be in the given scenario.
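
    As a toy sketch of the difference (entirely made-up numbers, not a real study): iteration changes something about the design between rounds of testing and keeps the change only when the test says it helped, while pure repetition re-tests the same version and learns nothing new.

    ```python
    import random

    def test_with_users(button_size: int) -> float:
        """Pretend usability test: scores peak around a 48px button."""
        return 100 - abs(48 - button_size) + random.uniform(-2, 2)

    version, best_score = 32, test_with_users(32)
    for round_number in range(1, 6):
        candidate = version + 4              # iterate: alter the design before re-testing
        score = test_with_users(candidate)
        if score > best_score:               # keep the change only if testing says it helped
            version, best_score = candidate, score
        print(f"round {round_number}: size={version}px, best score={best_score:.1f}")
    # Pure repetition would re-test the 32px version five times and learn nothing new.
    ```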

    When it’s more trouble than it’s worth

    Something important to remember is that there comes a point of diminishing returns. This applies more to product development than to scientific work, because developing software doesn’t usually come with the responsibility of something like a life-saving drug.

    We can observe diminishing returns when we enjoy a freshly baked chocolate chip cookie. The first cookie is heavenly, so we decide to have another. The second cookie is also delicious, but not quite as enjoyable as the first. If we continue down this path of cookie inhalation, after four or five (to each their own), we probably won’t enjoy them at all anymore.

    We can iterate on something twenty times and it might be better than it was the fifteenth time, but after going through the process enough, we can gauge when it’s truly time to call it quits.

    There’s no hard and fast rule around iteration in software because every situation at each company is different. At some companies, they may find that the sweet spot for them is testing and iterating three or four times before delivering, while others may take eight to ten iterations.

    With software (and possibly science, but I wouldn’t know for sure since I’m not a scientist), we have deadlines to hit. We can’t spend the rest of our days iterating on a feature or product until it’s perfect, which is an illusion anyway.

    We can iterate and test a handful of times before we have to deliver tangible value to our customers and the business, so it’s important we test the right way, just enough, before iterating again.

    Companies tend to follow the Agile methodology, which gives them a framework for consistently delivering value to the business and the customers over time.

    In the days of yore, the de facto way to build product was the Waterfall methodology, in which, in contrast to Agile, teams deliver value in one big release after months or even years of development.

    Testing can be hard to do, which is why so many companies simply don’t do it at all. It takes time and upfront effort, and when companies equate value with production, it can be a hard sell to continually do it. There’s been a ton of writing around how to conduct lean user research, so I won’t write about it here.

    Our work is never done

    Both science and design are “never finished”. Have you ever used a successful piece of software that never has updates? Or seen a news article that reads something like “Coffee is good for you” a year after reading an article titled “Coffee is bad for you”?

    That’s kind of the point, though isn’t it? Scientific studies are happening all the time and they’re always proving and disproving hypotheses.

    Nothing is certain 100% of the time, especially when dealing with something as erratic as human behavior. As outlined earlier, science usually has more weight to its decisions than software, so we don’t need to worry that an immutable scientific truth such as gravity can be disproven as easily as a pattern in a social media application.

    I like to push the importance of “done over perfect” in my job, but of course there’s a lot of nuance in that phrase. There’s a balance that we are all striving for, while trying to destroy the perfectionist in ourselves. We want the things we deliver to be great without being reckless in our delivery or overthinking ourselves into a rut.

  • I’m No Longer Trying to Be the Best

    I find it difficult to take on a new hobby or activity in my life that isn’t some kind of means to an end. What I mean by that is that I can’t start drawing because it’s fun or relaxing; I have to have aspirations to be an artist. I can’t just code for fun or to learn; I have to have a goal of becoming a developer.

    This goes for the vast majority of things I do outside of work, and it extends beyond leisure. Maybe it’s some kind of competitiveness that I have with myself, or maybe it’s a competitiveness I have by seeing others accomplish so much at such a young age.

    Being a generalist with many interests is a double-edged sword. In some aspects, it’s great because I get to expand my horizons beyond one field or subject, and can more easily see patterns and adapt to what may come to me. On the other hand, it can be frustrating to not be a “master” in something, although I am naturally better and more interested in some things than others.

    I envy those who can simply do things for fun, or for their own intrinsic value, without a goal of becoming incredible at them. It’s not necessarily a bad thing to want to be good at whatever you are doing, but it becomes a problem when it removes enjoyment from the experience and a hobby is no longer a hobby, but almost another job.

    For me, I think it’s ingrained in my head that I want to be good at whatever I set my mind to, but I believe there’s a balance I can get to where I enjoy an activity for its process while still having some smaller goal to set my attention to.

    What is intrinsic value?

    When dealing with finance,

    “Intrinsic value is a measure of what an asset is worth. This measure is arrived at by means of an objective calculation or complex financial model, rather than using the currently trading market price of that asset.”

    Investopedia
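
    One common example of such an “objective calculation” is a discounted cash flow, which adds up an asset’s expected future payments in today’s dollars. A minimal sketch with made-up numbers (my own illustration, not from Investopedia or this essay):

    ```python
    def intrinsic_value(cash_flows, discount_rate):
        """Sum of future cash flows, each discounted back to today's dollars."""
        return sum(cf / (1 + discount_rate) ** year
                   for year, cf in enumerate(cash_flows, start=1))

    # An asset expected to pay $100 a year for five years, discounted at 8%:
    print(round(intrinsic_value([100] * 5, 0.08), 2))  # ~399.27, regardless of market price
    ```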

    This piece from the Stanford Encyclopedia of Philosophy is interesting, discussing the notion of intrinsic vs extrinsic value, and if it even exists at all.

    At a basic level, something has intrinsic value when it is done for its own sake. Like I wrote above, you cook because you enjoy it, not to become a chef. You play music because it’s relaxing and fun, not to become a famous musician.

    Anything can have intrinsic value, if you apply it to an activity or object. You may have heard someone answer the question, “Why do you like this thing?” with “I just like it.” Which is actually a perfectly acceptable answer, no matter how passive it may seem.

    I also feel that living in a capitalist society, where to be of worth you need to be making money, could influence how people have difficulty assigning intrinsic value. Why would I do something “for fun” when I could be spending my time making more money? Or why start doing anything if my goal isn’t to become “the best”?

    In my opinion, American society thrives on competition, where starting a business and making a living isn’t good enough, you have to be bigger and better than your neighbor. You have to have the more expensive car, shop at the more expensive grocery store, and have more social media followers.

    Competition is a good thing, because it drives us out of our comfort zones and makes us demand more from our lives and the things we do. But just like anything, when it exceeds moderation, it can become a drug in itself: a never-ending race that looks like it’s between you and others, but is really between you and yourself.


    How do I reach the point where I do things for their own sake?

    I believe a way for me to begin applying more intrinsic value to some of the things I do is to remember why I started doing them in the first place. If I have a full-time job where I make enough money to live on, I don’t need to do anything else for monetary value. I can if I want, but it’s not required.

    I’m speaking from a place of privilege of course, because there are many people who don’t have the luxury of leisure time, a hobby, or just relaxing. They have to work more than one job, take care of family, and do everything we all have to do as adult humans, which, with one full-time job is a lot.

    A good start is acknowledging when something is teetering into the “competitive with myself” range, and writing down why I started doing this thing in the first place.

    Setting smaller goals and working towards those is also a good practice, instead of having a goal of becoming “the best,” which is subjective anyway. You’ll never be undoubtedly “the best” at anything, really, so we should strive to just be better than we were yesterday.

    This is all easier said than done, but without taking action, we become stagnant and complacent.

    By acknowledging the problem and setting small goals, we can become better at applying intrinsic value to the things we want to, which is a skill in itself.

  • Hybrid Working is the Future

    Working remotely has become an increasingly important option for prospective employees when searching for their next venture. If I have kids who sometimes fall ill, will I be able to attend meetings virtually and still take care of my business from home? If my car breaks down on the way to work, can I make progress on my presentation from the repair shop?

    These kinds of questions are running through the minds of current and future employees, especially in the tech space. Granted, most jobs in the tech industry can be done remotely. Software engineering, design, QA, and even product management can all be done as successfully by a distributed team as by a colocated one.

    I have a feeling we will increasingly see companies go beyond simply allowing X number of work-from-home days a week or month, toward arrangements like the whole team being remote on Wednesdays, or something similar.

    Right now, I have about a 45 minute commute from my house to my office. As I’ve gotten used to the drive, along with the fact that I listen to audiobooks or podcasts on the way, it has become less and less of a big deal. Of course, I’d like to have a shorter commute, but for right now it could definitely be worse. I’ve spoken to people who have commutes extending over an hour each way, which I find ridiculous.

    Kramer getting on the train

    Of course, there is a case for a bit of a commute, as for many people it’s the only time in the day when they are completely alone with their thoughts, listening to music, a book, the news, or appreciating a bit of silence. I enjoy listening to an audiobook while driving, but would I rather have two hours of my day back if I didn’t have to commute? Absolutely.

    The case for remote working is not just about the time in our day lost to driving or taking the train. When it comes to hiring, you really can’t beat the fact that you can hire anyone in the world, rather than just those in your area code.

    As rents in the Bay Area have reached astronomical levels, along with cities like New York, Seattle, and Chicago, it just makes the most logical sense to have employees who can live anywhere rather than forcing them to be in one of those places.

    Companies like Zapier, GitLab, and InVision have shown that teams can be successful even though they don’t work next to each other.

    Another common theme I’ve noticed, especially among designers, is that when they need dedicated time to get some work done, they choose to work from home. With most companies opting for open offices, with all the pros and cons they come with, it’s no surprise that employees can have a difficult time concentrating at the office and need to work from home to be uninterrupted. Ambient noise, meetings, getting pulled here and there by coworkers, and so on all affect productivity when the work requires uninterrupted “flow” state concentration to be most effective.

    annoyance at the office

    It’s easy to walk into any newer office and think, “Wow, all this is such a waste of money.” The physical perks, furniture, office space, and many other things contribute to this opinion.

    While I enjoy going into an office, seeing my coworkers and spending face-time with them, there is no doubt that, from a productivity and financial standpoint, remote working trumps working in an office. If I were starting my own company tomorrow, while trying at first to find workers locally, I would keep my search open, and allow remote employees from anywhere in the world.

    There are, of course, things that become an issue when working remotely, such as communication, transparency, and collaboration. But all these things can be overcome with good organizational design, effective tools, and a good on-boarding process.

    I don’t want to say that it’s as black and white as good and bad, because, especially at this point in my career, I appreciate being in close proximity to my coworkers, learning from them in person, and feeling like I am a part of a physical group.

    It’s difficult to argue that a long commute, sometimes unnecessary office perks, and all the extra costs that come with a physical office space are the best use of funds in every situation. I feel that hybrid working is the future, and starting today, most companies could allow remote working when employees deem it necessary, without compromising any productivity.

    Something to think about: Do you have a long commute to work every day? Are you able to work remotely if possible? If not, see if your company is willing to implement a program to allow for more flexible working.

    This was originally published on Prototyper.

  • Being the Only UX Designer on an Agile Team

    I started my UX journey as a solo designer, and I found the dynamic of being on a team is totally different than being solo. When you’re solo, you don’t have to worry about collaboration or going to a ton of team meetings. You have enough other things to worry about on your own, though.

    Having recently worked on an enterprise software team with Development, QA, and Product team members, I learned a lot about working together, design advocacy, and compromise.

    I wanted to share some of the things that I’ve found to be important when working as a UX Designer in general, especially as the only one on a team.

    Communication

    comic on the phone

    I’m sure I don’t have to tell any of you that communication is one of the most important factors when it comes to working on a team. Regardless of position in your organization, everyone needs to be a good communicator.

    *Being on the same page as your team is possible simply by attending stand-ups and staying active on team chats or emails.* Stay updated on what the team is saying and working on, even if you’re not meeting that day.

    Be clear about the goals and responsibilities of the team. Everyone should know what everyone else needs to do, so that if anything comes up, they know who to go to for questions or clarification.

    I also learned to voice my opinions and concerns when they arose because it’s always better to talk about things too early rather than too late.

    Collaboration

    people working together

    UX design is never done in a vacuum, so working with others is integral to the success of design, even if you’re the only designer on your team.

    Personally, I worked very closely with my product manager, because he had much more domain knowledge than me and could help educate me on the more complex parts of our product.

    When I had a question about technical capabilities, I would go to one of the developers. Luckily for me, they were always ready to explain something to me or go over an idea and its technical feasibility.

    Although I was the only one doing the design work, I would frequently check in with my team to make sure I was on the right path.

    Advocacy

    megaphone icon

    As UX designers, we can hope that our teammates all understand what UX is and what our contribution to the product should be. Not everyone is lucky in this regard, so as designers we have to do our best to advocate for ourselves and our value.

    *If the team jumps straight to suggesting solutions when talking about a new feature or a change to an existing one, part of our job is to redirect attention to the users, the research, and the business goals.*

    By aligning these things in our thought process, we can make better decisions from the get-go instead of backtracking later.

    The team dynamic will reflect how its members approach problems and view the UX process. Some teams will see the value in things like research and user testing; others will see them as a waste of time. We have to adapt to our own environments and try to get on the same page with our team as best we can.

    Active Listening

    active listening

    Along with the other items on this list, this one is like a superpower if used correctly.

    When you’re actively listening, you are fully concentrating on what is being said rather than just passively ‘hearing’ the speaker’s message. This comes in handy when your team is discussing things like the product roadmap or sprint goals, or when a team member brings an idea of their own to the table.

    If you’ve concentrated fully on their ideas, you can give better feedback and get more out of the constructive criticism that is bound to arise.

    When you’re picking up on things like the speaker’s behavior, tone of voice, and reasoning for their statement, you’re already ahead of the game when it comes to listening comprehension.

    Wrapping Up

    Obviously I’ll never be done learning, nor do I want to be, but these are just a few of the main things I’ve taken away from my time working on a team building software.

    • Communication
    • Collaboration
    • Advocacy
    • Active Listening

    Something to think about: As the only designer on a team, what kinds of things have you learned so far on top of these topics?

    This was originally published on Prototyper.

  • Undertale

    undertale logo

    Undertale is an RPG created by indie game developer Toby Fox. In the game, players control a human child who has fallen into the Underground, a large, secluded region underneath the surface of the Earth, sealed off by a magic barrier.

    The player meets many different beings along the way, some nice, some not so nice. The player also engages in battles and conversations, and the actions taken in them dictate the outcome of the game.

    undertale battle scene

    Undertale was made by one person, yet it exhibits humor, emotion, replayability, and ease of access. Fox not only made the whole game and all of its music, but put so much thought into the character relationships that the game can feel really great or really regrettable depending on your choices.

    undertale talking to Sans

    Character development goes so far as a pair of skeleton brothers whose names are the fonts their dialogue is written in.

    The game is simple, with only the arrow keys as controls for most of the game. This allows essentially all players to enjoy it. From a usability standpoint, this is great, because many people with disabilities cannot use a mouse at their computer, and most PC games require a mouse and sometimes even an exceptionally fast gaming computer. Undertale is beautiful because it opens its doors to everyone; anyone with an interest can play the game.

    The graphics are charming, if pixelated, and no 60 fps is required for enjoyment. Every aspect of the product is accessible, and Mr. Fox thought of these things when creating his claim to fame.

    The player and your caretaker, Toriel.

    The game represents diversity, even among its cast of monsters. You feel like you are in another world, yet it seems familiar in a way. The game gives you the freedom of choice, along with repercussions for your choices, good and bad.

    This is a good example of what a game can become with the right intentions. The game is usable, ethical, and accessible. The reception reflected that and made Toby Fox and Undertale very well known in the indie game world.

    Something to think about: What other games can you think of that give a nod to usability and ethics?

    This was originally posted on Medium.

  • Information Architecture (IA)

    Information architecture (IA) is a professional practice and field of studies focused on solving the basic problems of accessing, and using, the vast amounts of information available today.

    In simple terms, it answers the questions:

    Where am I? What am I looking at? Where else can I go?

    Information architecture, if done well, allows the user to navigate around a website or application with ease and gives them the best way to accomplish their goal.

    The term “information architecture” was first coined by Richard Saul Wurman in 1975. Wurman was trained as an architect, but became interested in the way information is gathered, organized and presented to convey meaning. Wurman’s initial definition of information architecture was “organizing the patterns in data, making the complex clear”.

    venn diagram of information architecture

    This is commonly how IA is represented.

    There are two main approaches to defining an information architecture. These are:

    • Top-down information architecture: This involves developing a broad understanding of the business strategies and user needs before defining the high-level structure of the site and, finally, the detailed relationships between content.
    • Bottom-up information architecture: This involves understanding the detailed relationships between content, creating walkthroughs (or storyboards) to show how the system could support specific user requirements, and then considering the higher-level structure that will be required to support those requirements.

    The most common methods of defining an IA range from site maps, page templates, and layouts to personas and storyboards.
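
    As a rough illustration, a top-down site map is essentially a tree: you can answer “Where am I? What am I looking at? Where else can I go?” for any page just by walking it. Below is a minimal sketch in Python; the page names and the describe helper are hypothetical, not drawn from any particular project.

    ```python
    # A minimal, hypothetical site map modeled as a nested dictionary.
    # Each key is a page; its value holds the pages directly beneath it.
    site_map = {
        "Home": {
            "Products": {"Product A": {}, "Product B": {}},
            "About": {"Team": {}, "Careers": {}},
            "Support": {"FAQ": {}, "Contact": {}},
        }
    }

    def describe(tree, page, path=()):
        """Answer the three IA questions for a given page by walking the tree."""
        for name, children in tree.items():
            here = path + (name,)
            if name == page:
                print("Where am I?          ", " > ".join(here))
                print("What am I looking at?", name)
                print("Where else can I go? ", ", ".join(children) or "(nowhere deeper)")
                return True
            if describe(children, page, here):
                return True
        return False

    describe(site_map, "About")
    # Where am I?           Home > About
    # What am I looking at? About
    # Where else can I go?  Team, Careers
    ```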

    Without a good IA, people will have a difficult time finding what they need on a page or website, and they will most likely not come back. Just as with the architecture of buildings, if you do not design with codes, accessibility, and the user in mind, the final product suffers.

    Something to think about: What websites do you frequent that have good IA? Which ones are not so good?

    This was originally published on Medium.