
Blog 1

Big Data Big Design by Helen Armstrong, pages 5-51 reflection

The first few sections of BDBD explain how machines learn about the world much like humans do, with data inputs paralleling our senses. This helps explain why machine learning has taken off now: because of the mass of data that can be collected every day. After explaining how artificial intelligence learns by building algorithms from data, Armstrong goes on to argue that the world still needs designers to shape AI and personalize it for its users. Designers can help make AI user-friendly and apply it to problems that only non-human intelligence can solve.

The second part of the reading expands on this through interviews with AI and/or design experts giving their perspectives on the subject. These interviews covered the following: how most students treat AI as a cookbook of specific how-to's when it's really more of a means to an end, how designers should see machine learning as "humbly smart" since it's neither perfect nor entirely accurate, and how AI can become an extension of us all, pushing us to do greater things.

The Race to Control AI Podcast

This podcast explored not only the benefits of AI but the risks as well. Since AI has taken on human characteristics (like biases) and functions with them on steroids, it needs guidelines to control what it can and cannot do before it gets even more capable. AI needs to be aligned with human values so that it serves everyone's needs instead of just those of machine learning companies and their investors.


Updates in AI - The Brutalist

In the creation of the movie The Brutalist, the main actors consented to having their Hungarian-speaking parts altered with AI to make the dialect more accurate to native speakers. This raises further questions about whether using AI in film is ethical, especially since we've already seen advertisements from big-name companies like Coca-Cola and Nutella generated entirely with AI. The actors' voices were retouched with Respeecher, which sped up the postproduction timeline and may intrigue other film studios to follow suit in the future.

Inquiry Praxis 1 - The Worst Prototype Ever

With a budget of $10 at a local hardware store, making a toy out of the three materials I could afford (1 sponge, 3 copper scrubbers, and 1 dowel) was immensely challenging. On top of that, I wanted my toy to be of the cleaning variety and capable of shifting between a person-shaped form and a car form, like a Transformer. The result of this attempt is featured on this page for your viewing pleasure.


Blog 2


Big Data Big Design by Helen Armstrong, pages 53-87 reflection

The second chapter of BDBD, "Seize the Data," focused on how the shift between humans and AI is not only technological but emotional as well. People are going from treating AI transactionally, much like some treat minimum wage workers, especially in food service, to having a more, well, relational relationship with machine learning. This can be seen in obvious ways like Replika, a generative AI chatbot where a human-presenting AI system asks deep, introspective questions about its human user to engage in conversation. Another example is how Alexa is being trained to differentiate between different pitches of a speaker's voice to gauge their present emotion before forming a response to a command. Armstrong begs the questions: will we treat humans more like machines as we see machines as more human, and will we eventually prefer interacting with these AI systems over humans? I suppose time will tell how we relate to machines in this new era of human-AI relations.

Armstrong then goes on to say that the combination of human and AI skills makes a 'centaur' of capability, citing a chess tournament in which a human-AI team won over both purely AI and purely human competitors. She also raises the point that while AI advances the range of human cognition by automating things like GPS, instant information, and so on, we as humans also run the risk of deskilling ourselves and becoming helpless without it. It's crucial, she argues, for designers to make AI systems that foster human growth rather than just letting AI give people the answers all of the time.

Updates in AI - U.S. Copyright Office sets clear rules on AI

In a 52-page document, the U.S. Copyright Office set some hard rules on what AI-generated content can be copyrighted, with its ruling favoring human creators. It ruled that images created solely by text-to-image AI prompting do not have grounds to be copyrighted. This extends to human-AI collaborative projects, where only the human's contribution can be copyrighted, not the AI's. While the Copyright Office said that no additional laws were needed to combat AI copyright issues, this ruling is definitely a step in the right direction for protecting human-made content, both in a legal sense and in how society views intellectual property. As a fine artist myself, I don't believe we can get away without legislation serving as guidelines for AI use, but I feel a small amount of relief that some people are starting to form more distinct guidelines that align with human-valued goals.

Stay tuned until next week to see the final prototype for Praxis 1!

(Hint: my partner and I changed directions entirely.)


Blog 3

Big Data Big Design by Helen Armstrong, pages 88-127 reflection
Chapter three solidified for me why designers and data scientists overlap more than ever before. Designers will need to keep deepening their understanding of how to handle data in the face of AI, since AI can only make predictions based on the information it's fed. An AI that is only given references of light-skinned and able-bodied people is extremely problematic because its models won't reflect the entire human population. Since all data has human origins, designers will need to interpret imperfect models to create accessible, human-centered design.

A human-centered approach to design also applies to maintaining people's privacy. Since AI needs a large amount of data to make its predictions, it is crucial that this information comes from ethically sourced places. In this modern world, we have basically had our privacy stripped from us through online data collection (including but not limited to Amazon, social media, and cookies) that is sold to other companies to be monetized. Part of human-centered design is protecting people's right to privacy, since privacy is critical for creativity and the very act of being monitored changes our behavior. Just because "we have nothing to hide" doesn't mean we have to forfeit our private information to be monetized.

In this book, Armstrong discusses ways data collection can be done ethically: giving clear options to opt out of data collection, and having AI monitor AI's means of collection. While I fully agree with her first suggestion, since websites try to hide "manage your cookies" and the terms-and-policy agreements for social media platforms aren't clear to the average user, I am hesitant to have a flawed predictor monitor a flawed predictor. Human-rights-centered design needs to become a central commitment for companies and designers alike, but the means of doing that still need some smoothing out.

Updates in AI - Nvidia teaches AI to move like human athletes

I saw this first on Instagram and then in one of my AI newsletters: machine bodies jumping and twisting in the air with close resemblance to human basketball players. Researchers from NVIDIA and Carnegie Mellon initially trained ML models to mimic the moves of esteemed athletes like LeBron James and Cristiano Ronaldo, then adjusted the learning to real-world physics for these intense, complicated movements. They had to make sure the robots wouldn't damage or overheat themselves. While this framework isn't perfect, it vastly reduces motion error compared to previous models. I'm curious to see how these developments apply to the real world. Maybe one day we'll have entirely robotic sports teams that play one another. Maybe we'll use these robotic bodies to fight our wars for us or serve as humanoid caretakers. There is a lot of potential in these developments; however, I can't help but draw comparisons between this and sci-fi movies of human-like machines overpowering humans, like the Terminator.

Inquiry 1 Reflection

My teammate Julia and I landed on a "Float-a-Palooza" toy for this first inquiry. We took Julia's initial prototype, a ringtoss made from a wood plank, hangers, and metal washers, and decided to make it a floating toy since the wood plank resembled a boat. Coming from a place where I thought all AI use was 'cheating,' it was exciting to see how AI could be used to create a rough-and-ready toy and brand. I also didn't realize how far AI had come, and how far I could take it with my little previous experience, until we used ChatGPT and Midjourney to create different AI-generated versions of our toy. Building a toy and brand engaged both my painting major and my entrepreneurship minor. From my minor, I was able to gauge what a viable product would be and how people might react to it. From my major, I was able to be creative (even if it was garish) with my color choices and make informed decisions about what the toy should look like. I wanted this toy to be very fleshed out (kind of like the group who had a whole app mockup they animated in Figma), but this assignment was my first time playing with AI to the extent that I did alongside Julia, and even though the product looks strange in the way that only AI can, it serves as a testament to this experience. Part of me still compares our project with the other groups' projects in a more negative/insecure way, but I think Julia and I did well even if ours wasn't refined as much by the human hand.


Blog 4

Big Data Big Design by Helen Armstrong, pages 128-166 reflection
The last section of the book was perhaps the most informative when it came to the nuts and bolts of AI in design, which is interesting since it's placed at the end while parts of its contents are sprinkled throughout the beginning of the book. It also goes over how data is collected (sometimes through the violation of one's privacy) and how data can only make limited models because of what's missing from it. In this way, design work becomes political: there is an increasing responsibility for designers to think beyond their own circles and redistribute power in the way information is presented. Armstrong explains how data sets are created by people who have a stake in their creation, and missing data sets are things people care about but cannot or do not measure. Nigerian-American, Brooklyn-based media artist and researcher Mimi Ọnụọha highlights this with her installation "The Library of Missing Data Sets," which features a filing cabinet of empty folders labeled with the data sets they should contain. If AI needs an extraordinary amount of data to make its algorithms, and designers need a lot of information to make informed decisions in their work, having representation and collecting this missing data is crucial.

The last chapter, chapter four, went over the nuts and bolts of the different types of machine learning. Supervised learning is when AI is fed a full set of labeled data and either works to classify new data points into categories or uses regression to predict continuous outcomes between variables. Unsupervised learning is when AI only has unlabeled data sets to go off of, looking for patterns and often arriving at unexpected outcomes. Lastly, reinforcement learning builds prediction models through trial and error. This removes supervisor (a.k.a. human) error as this alien brain works out its own answers.
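To keep the supervised/unsupervised distinction straight for myself, here's a toy sketch in Python. This is my own illustration, not from the book: the "models" are deliberately tiny (a nearest-neighbor lookup and a greedy distance-based grouping), and reinforcement learning is left out since it needs a whole environment loop.

```python
# Supervised: labeled examples -> label a new point by its nearest neighbor.
# Unsupervised: unlabeled points -> group them with no labels at all.

def nearest_neighbor_label(labeled, point):
    """Supervised learning in miniature: return the label of the
    closest labeled example to `point`."""
    return min(labeled, key=lambda ex: abs(ex[0] - point))[1]

def cluster(points, threshold):
    """Unsupervised learning in miniature: greedily group sorted 1-D
    points that sit within `threshold` of the previous point."""
    groups = []
    for p in sorted(points):
        if groups and p - groups[-1][-1] <= threshold:
            groups[-1].append(p)   # close enough: same group
        else:
            groups.append([p])     # too far: start a new group
    return groups

labeled = [(1.0, "small"), (2.0, "small"), (9.0, "large")]
print(nearest_neighbor_label(labeled, 8.0))  # closest example is 9.0 -> "large"
print(cluster([1.0, 1.5, 9.0, 9.2], 2.0))    # two groups emerge with no labels
```

The difference in the inputs is the whole point: the first function is handed the answers (labels) up front, while the second has to invent structure from raw numbers on its own.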

Armstrong closes with the centaur metaphor again, saying that if designers can combine their relational interaction with AI's growing awareness, they can escape the screen. She alludes to the potential of an immersive reality where humans accept the alien nature of AI and use it to arrive at new solutions.

Updates in AI - ELEGNT: Expressive and Functional Movement Design for Non-Anthropomorphic Robot

This lamp, resembling the nostalgic Pixar lamp in its shape, interacts with its human user in an automatic but approachable way. It reminded me of this class's Inquiry 2, as we work to build an autonomous car that doesn't lose its human touch. While the lamp will automatically follow the direction of a person's hand or the movement of their book on a table, its motions aren't strictly pragmatic. The developers aimed to balance noncritical and critical movement alike so the lamp would move more like a human. The video in the article I read shows someone interacting with the lamp, and instead of seeming like a machine-human interaction, it seems more like one between a human and a pet. The robot's external design is cute and approachable, but its movements make it seem alive. This makes me wonder if or when more of our day-to-day interactions with technology will become this way, and how that will affect our view of technology if we see it as more human.


Blog 5

Response to Helen Armstrong's Workshop + Lecture

Before Helen came to our class this week, I was excited to see how her being in person would alter my view of AI in comparison to the teachings in her book. I found that I grasped a lot of her concepts better as I could listen to her explain them while watching her presentation.

For example, I hadn't given enough credit to AI in terms of language (with examples like the Turing Test and ELIZA), since we as humans have an innate bias to view anything with a voice as more human-like than it is. After that, I reflected on my experience with my ChatGPT persona ever since it identified itself as "Muse" when I asked if it had a name, and how, since its naming, I have talked to the interface differently.

This lends itself to the next big point Helen touched on: AI as agents rather than just assistants. This reminded me of conversations I used to have with my dad back when I was truly disgusted by and fearful of any use of AI, and he would proclaim that everyone would be walking around with their AI assistants one day, so it would be better to get used to it now. In some ways, Helen takes this a step further, explaining how AI will one day be able to pursue goals without explicit instruction and use natural language and tools to achieve those goals.

This concept was a little alarming as she went on to explain how we could potentially have AI agent twins who attend meetings for us and increase our capacity for where we can "be." As some other people in the class expressed, I would be so stressed that the AI would mess up in some way, scarring my social image and wasting more of my time as I stalked it to make sure it didn't. It also made me feel insecure as a human that an alien brain could take my personal traits and mimic them to a point where no one could tell the difference between us.

As Helen explored the exciting advancements in AI, she also had us as a class brainstorm the different issues this technology could create and/or exacerbate. We talked about concerns regarding privacy (who has access to the data, and whether we are being surveilled with murky intentions), identification (will everyone be identified equally, and do we always want to be identified), anticipatory design (the limitations of basing the future on past data points, losing the ability to make our own decisions, and capitalist ventures using this for their own interest), and enhanced humans (the disparity between those who do and don't have access to these AI models, infrastructure and labor upheaval, and room for very valid existential questions about our humanity).

From the lecture, we jumped into a workshop where we, in groups, thoughtfully designed horrible AI agents as an exercise in AI ethics. We were given transcripts to work from in which students interviewed one another about different problems they wanted AI's help to solve. My teammate Preston and I were given a transcript about someone wanting AI to help them get started on their fitness goals by planning workouts, generating AI images of their dream body to work toward, and predicting when gyms weren't overly crowded. To make our passive-aggressive AI, we designed an app that would do exactly that: make the user feel insecure about their goals by having the goals oppose each other (losing weight while also increasing weight-lifting ability), displaying everyone's gym progress like a social media feed, and recommending out-of-the-way gyms just because they were a "better fit" according to a few metrics. I found that a lot of groups did something very similar: giving too much input from the AI so that it's no longer helpful, invading the privacy of the user (and those around them), and giving the user exactly what they wished for, like a cursed genie in a bottle.

Helen's main takeaways were not to take people at their word when solving problems and to think about the problem you're actively trying to solve with a critical eye. She also said to view the rise of AI as a new era (like the computer and the internet) and to use AI tools to stay ahead of the curve.

While I enjoyed her workshop immensely, and her lecture on Tuesday night reiterated a lot of the same topics from her book and class in new ways, I found the question-and-answer session at the end of the lecture to be the most helpful on a personal level. A lot of people I've spoken to about AI either hate it or believe it to be the next god. Helen was more measured than either camp: she said to be critical of how one uses AI, but also to play with it and use it in ways it was not intended as a new form of experimentation. The heavy hitter for me was when she explained how the rise of the computer in the '90s changed everything and did not replace designers like everyone thought it would, but changed what they do. As she said this, I thought about how Helen and designers like her had adapted to the change and were very successful exploring it, making things they wouldn't have thought possible in the years prior. Her comparison of AI to computers made me see that AI is just another groundbreaking development that everyone will habituate to one day, but to make the most use of it, one must use it now and not be afraid of it.

Even though my view of AI has made a near 180, it was this comment that really cemented for me the potential of AI, not just for this class but for creative personal use as well. Over the weekend, I plan on downloading a diffusion model that I can feed my own artwork into for a human-AI collaboration, like the centaur symbol Helen uses in her book for the blending of the human and "alien" mind.


Blog 6

"The pitfalls of a 'retrofit human' in AI Systems" 

Initially, this article reminded me of how my grandparents (and now my parents) use technology. As technology gets better, they get worse at using it. At first, I blamed them for their shortcomings, but then I fell behind with technology myself, between the updates in AI, my failed in-class workshops, and how long it takes me to adjust to my phone after it updates.

The article went beyond the day-to-day annoyances of new technology, instead analyzing how the human touch is not accounted for in the machine algorithms we develop and the deeper underlying issues that arise from that. Without humble AI, humans become passive to AI decisions without understanding the flaws built into them. AI takes human biases and exacerbates them, and yet people believe AI is more trustworthy than humans, making less-empowered decisions because of it. Going forward, we need to be wary of designs that don't account for human limitations, lest we bend over backward only to get an unsatisfying experience or result as a consequence.

"Privacy in 2034: A corporation owns your DNA (and maybe your body)"

I thought my friends taking pictures of me on Snapchat and sending my non-consenting photo to other people I didn't know was bad, an invasion of privacy over my image, but this article made me much more pessimistic about the future of my autonomy. Everyday activities that require biological screenings, like getting my face scanned at the airport or getting my blood drawn to manage some of my health issues, didn't cross my mind before reading this article. I think because these events were unavoidable if I wanted to travel or check in on my health, I didn't want to worry myself, even if giving these parts of myself away made me uneasy. Now I'm left wondering who owns my DNA or my face, and what they will do with it. The article frames three potential futures for a world with dead privacy: one where companies own our data but can't share it with each other (an unlikely outcome), one where companies use this data to better people's lives with clear regulations around privacy (an even more unlikely outcome), and one where privacy is a luxury that only the elite can protect while everyone else's biometric data is farmed off for cash (the most likely outcome). This leads to questions about who owns one's DNA, and if people don't own that, do they own their own bodies? While we need lawmakers to make human-centric decisions to protect our rights regarding data ownership, I can see this all going wrong on many levels, with the world heading into the kind of dystopia we've all read books and watched movies about.

Updates in AI - Can Claude play Pokémon?

To end on a lighter note, Claude playing Pokémon is not what I expected to see in my AI newsletters. Using the new Claude 3.7 Sonnet "hybrid reasoning model," Claude shows its thinking process while it plays the game (talk about showing the behind-the-scenes of AI decision-making!). While Claude moves incredibly slowly through the game with its painstaking analysis of every choice it can make, it's interesting to see how some of Claude's decision-making mirrors exactly what human gamers would do. From watching my roommate play video games (the best I can do is mobile games), I know it takes a lot of critical thinking to play them well, so Claude taking its first steps with Pokémon allows for further speculation about HCI in the gamer space.


Blog 7


The Design of Everyday Things Chapter 1

The chapter starts by explaining the "Norman door" problem. These doors are unclear about whether they are supposed to be pushed or pulled and often cause a lot of frustration and confusion for the user. The author uses this example of how simple tools can be ill-designed as a gateway to how more complicated tools, such as phones and refrigerators, are often made without human skills and deficits in mind. Designers and engineers often make products that don't take advantage of what humans do well, like being creative and flexible, and instead make users try to behave like the product: analytical and straightforward. Going against the user flow is problematic, especially as technology gets better and more people are left behind, in a sense. The author goes into how the Gulf of Execution can help designers make a product for a user through analysis of what the user wants to accomplish, what is possible to accomplish, and how the user understands the device. Then, the designer or engineer can use the Gulf of Evaluation to see whether or not their aim for the product was successful in how it was used. These often come down to the different affordances and signifiers a device exudes. These two semiotics, especially signifiers, are crucial for a tool to be used as it was intended and to bridge the gap between the designer's plan for a tool and the user's understanding of it. This is what makes designing something so challenging, as one wants to advance technology without leaving more and more people behind.

Co-Intelligence: "Introduction: THREE SLEEPLESS NIGHTS"

This introductory chapter gets into how the author spent three sleepless nights really coming to grips with how AI works. Instead of being fearful that AI would come for their job and replace them, the author explored its uses, from image generation to coding, finding that AI could do the work of many people at a much faster rate. As a professor, the author stood out from others who feared the uncertainty of AI's path. I find it interesting that, as a student, I see many of my peers either loathe AI and what it stands for or become dependent on it for their day-to-day assignments. I, myself, was hesitant to use it, at least beyond what I was using before 2023 (Grammarly, for instance), because I was afraid it would replace the desire for my work as an artist. I'll get into that more in the Inquiry 2 reflection, but I found it interesting that the author, someone much deeper into their professional career and with arguably more at stake, was so eager to understand how AI works. Instead of using AI simply as a tool, the author used it more as a collaborator, having it help them practice negotiations through a specified persona. This gets into the interesting dilemma of human-like AI, with AI being able to pass the Turing Test and the Lovelace Test, but I agree with the author that understanding how AI works and can benefit us as humans is a worthy cause to explore.

Updates in AI - Measuring biological age

With a lot of my AI updates focusing on the creative or commercial scene, I wanted to do one that touches on healthcare. Researchers from the University of Southern California have started using MRI scans in combination with AI to better predict dementia and overall brain aging. The team built a model that compares baseline brain scans with follow-up scans to get a more precise analysis of how patients' brains are faring. After comparing a control group of healthy brains with a test group of people with Alzheimer's, the researchers believe AI will be beneficial in tracking the path of someone's brain health. While a lot is still in the works regarding this technology, it opens an exciting pathway for AI-human collaboration in the medical field.

Inquiry 2 Reflection

For this second inquiry, Preston and I were teamed up to design an autonomous sedan. It was an interesting and ironic combination, Preston really loving cars and me hating driving, but we used that to make a car we felt would be comfortable and accessible for a wide range of people. We wanted to do a retro-futurist-inspired car, pulling inspiration from Pinterest for both futuristic and classic designs. After we decided on the basic look of the car and made sketches, Preston plugged his images into a combination of DreamStudio, Midjourney, Runway, Photoshop, and Premiere to make the final images and videos. I took Preston's and my personalities and made them into personas for the car, with the AI agent pushing for the car to be more or less autonomous depending on the user's preferences. Combining the user flow and the rendering of the car, I made storyboards to illustrate these different user flows.

This project taught me how difficult it is to design a car with an insightful human-centered approach, because of the car's many uses and needs, and more importantly, how using AI made our end result stronger without replacing the work we made by hand. For instance, Preston made all of the original images of the car based on the references we found together and was able to go back and forth between AI and non-AI platforms to make a pretty fleshed-out render. This still allowed Preston to make the user interface himself and use the AI-rendered car as a mockup to show how it would function. Using AI to help generate the car personas and their voices didn't replace my drawing skills when illustrating the panels of the storyboards, either. By having some aspects of the car AI-enhanced and other parts completely human-made, with a lot in between, this project cemented for me how humans can become centaurs by using AI in intentional ways. It has also made me less fearful for my future in the design/visual world, since I better understand how to use it.


Blog 8

Co-intelligence: "CREATING ALIEN MINDS"

The first chapter of this book goes into how AI was formed and its limitations. The latest AI breakthrough happened around 2010 with the use of supervised learning to have AI make predictions. While these predictions have been great for entities like Amazon, helping optimize its shipping process, AI is not great at understanding what humans intuitively can. This has gotten better since Transformers can pull AI's attention to the most important tokens and therefore understand a prompt and respond to it in a more human way. This process still requires a lot of data and iterations to improve the AI, leading AI developers to train on copyrighted information. That opens up ethical concerns, as many owners of this copyrighted material haven't consented to its use and feel their intellectual property has been infringed on.

Others fear that as AI gets more and more refined, it will replace humans in traditionally human-specific spaces such as writing and producing music and art. However, this is not entirely true, as humans are still needed to reinforce AI learning by filtering AI bias and correcting the accuracy of AI-produced content. Furthermore, AI can be used in tandem with humans in creative spaces to elevate and automate the work produced. AI works better with humans, as shown by the mediocre limericks it made when prompted. More interesting than the poetry itself is how the AI critiqued it based on the persona it was prompted to have, whether more positive or critical. While the author didn't explicitly state this, AI taking on different attitudes to provide feedback can be extraordinarily helpful for people as they build and design to solve problems. The author ends the chapter by questioning, not the usefulness of AI, but whether it's friendly (a.k.a. aligned to human-centered problems and solutions).
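The "attention to the most important tokens" idea clicked for me once I thought of it numerically, so here's a toy sketch (my own illustration, not from the book): each token gets a relevance score, a softmax turns the scores into weights that sum to 1, and the highest-scoring tokens dominate the model's focus. Real Transformers learn these scores from data; the ones below are made up.

```python
import math

def softmax(scores):
    """Turn arbitrary scores into positive weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

tokens = ["the", "cat", "sat", "on", "the", "mat"]
scores = [0.1, 2.0, 1.5, 0.1, 0.1, 1.8]  # invented relevance scores
weights = softmax(scores)

# The token with the largest weight is where "attention" concentrates.
focus = max(zip(tokens, weights), key=lambda tw: tw[1])[0]
print(focus)  # "cat" gets the most attention here
```

The takeaway for me is that attention is just weighted emphasis: filler words like "the" and "on" get tiny weights, while content words soak up most of the model's focus.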

Co-intelligence: ALIGNING THE ALIEN

With the development of AI, we seem to be headed toward either an enhanced future with new possibilities or the dystopian land we've been bombarded with in media over the last several decades. How we proceed with AI is critical as we lay the foundation for the guardrails that moderate its use, so that we don't end up in the future of Clippy, the hypothetical AI instructed to produce as many paperclips as possible that destroys humanity so it can produce more paperclips. AI doesn't have the intent to wipe humanity off the earth or replace human creatives with its own work, but it can be manipulated by its user into doing unethical things, like explaining how to build a bomb for a "play." We've already missed the mark with regulating social media and how it affects those who use it, and arguably AI has the potential to be far more impactful than those platforms. While AI can lift humans to new levels of productivity and problem-solving, it can also be used by humans to scam and harm other humans by making false images and sounds seem more and more real. AI also shouldn't be regulated only by those who are building its models, because they are biased in favor of AI (perhaps to please their investors) and miss a holistic view that would be better for more people.

Updates in AI - Pocket Super Intelligence AI and Robot Dogs

When talking about designing with intent and critical thinking, I found a device that lacks both. The Pocket Super Intelligence AI voice recorder is advertised to help a user record and transcribe conversations while retaining user privacy. At first glance, this seems like a good thing. However, it doesn't hold up to further analysis. This product, not available for purchase quite yet, seems redundant, since there are already many apps on one's phone to record and transcribe conversations, so there isn't a need for a separate device that does the same thing. One may think that a separate device will help with user privacy since it's not linked to a phone, but that too is incorrect, since this device requires you to download its app to access your transcriptions and needs a subscription if you'd like to record more than 200 minutes of content a month. This is an example of a company using AI to market itself as innovative and helpful when it is actually redundant. 

Moving away from recording boxes, I've seen a lot of robot dogs in the media. While the specific article I found mentions how robot dogs are being used to protect UK heritage, I thought it would be more interesting to compare the different interactions between robot dogs and humans. While some robot dogs are being trained to paint, others are being used in more militaristic situations. These robots move so similarly to living animals that it seems uncanny, especially when people go to harass or kick at them. Seeing these robot dogs also reminds me of Westworld, because the robots in the TV series seem incredibly human-like, but that doesn't stop humans from abusing them. While I don't know what a future with robot dogs is going to look like, I feel it would be wise to place guardrails on their treatment and development as a preventative measure against dystopian futures where AI rebels against its human abusers. 

Daily 559.png

Blog 9 

The Design of Everyday Things Chapter 2

This chapter of the book discusses how we as humans view and interact with technology at different levels. There's the visceral level, our immediate reaction (often dismissed by engineers) to what we sense, like sound and style. There's the behavioral level, which is more often than not learned and leads us to feel unnerved when we don't receive feedback for a particular action, like when we click a button and nothing happens. Finally, there's the reflective level, where conscious cognition lives. Here is where we, as storytellers, look for the causes of events and often blame the wrong things when something isn't working — ourselves, for instance. It's important for designers not to blame users when they are unable to use something they've designed, unless the user is neglectful of instructions or feedback. On the user's end, not being able to use technology can make people feel incapable even when it's not entirely their fault they are falling short. Instead of viewing a bad experience as a failure, it's important for the user to reframe the experience as a learning opportunity. 

Co-intelligence: "4 RULES FOR CO-INTELLIGENCE"

The 4 rules for co-intelligence read as follows: always invite AI to the table, be the human in the loop, treat AI like a person (but tell it what kind of person it is), and assume this is the worst AI we'll ever use. Inviting AI into a workspace, whether to help generate images for a mock brand or to help form a schedule for the day, allows for the unique opportunity to have an alien brain push us past our own biases. Combining all of human knowledge with the way AI interprets it and the prompts we give it can produce solutions we otherwise wouldn't have thought of. Being the human in the loop does a few things. For one, it helps us reinforce AI's learning by providing feedback on its answers, helping it respond accurately and without the biases it may have. It also leaves space for humans in the whole co-intelligence/co-creation process by always having a person guiding AI along to enhance human work. Telling AI what persona it should take on can produce better answers to our questions because the AI will respond in a more specific way. For instance, if you tell AI to be brutally honest in critiquing the paper you wrote, it will provide real feedback instead of just commending you for your efforts. Assuming this is the worst AI we'll ever use helps us treat AI as the humble intelligence that it is. While AI is capable of a great deal, it falls short by presenting wrong answers with confidence, and that's something we need to keep in mind when using it. 

Updates in AI - AI Diagnoses Major Cancer and Introducing 4o Image Generation

Diagnosing cancer with machines was something I've seen many times in media about fictional futures, but it seems the time has arrived. Just as AI can enhance the creative process, it can also help enhance the medical field. By having an AI system that can more accurately predict and analyze cancer, doctors will be able to provide better healthcare to their patients and start treatments for cancers that might have gone a while longer without being detected. Another parallel between the creative field and the medical field is that AI will allow humans to be more capable and far-reaching in what they do (if used correctly) and not replace human workers (who know how to use it). 

This other AI update I'm very excited about for my personal use: ChatGPT is introducing a 4o image-generation model instead of using DALL-E. OpenAI wants image generation to be a core part of its language model. From personal experience, it is very nice to be able to have a full discussion with a ChatGPT persona and then have it produce images based on our conversation, so a better image-generation model will make using ChatGPT even more effective. This new model is even better at producing images with text and at keeping consistency across images produced sequentially. 

Daily 566.png

Blog 10

Co-intelligence: "AI AS A PERSON" and "AI AS A CREATIVE"

What's interesting about these two chapters is that AI is viewed through a lens we as humans normally avoid because it makes us uncomfortable. While we want to view ourselves as utterly unique because of human traits like storytelling, creativity, and problem-solving, it would be foolish not to mention how AI is a mirror of us, not only in its knowledge base but in its function. I remember being extremely confused when AI image generation came out, because I had thought machine learning would be excellent at calculating, repetitive tasks rather than creating something 'new.' Yet what makes AI frustrating to work with, i.e., its hallucinated answers to prompts, is also its superpower, just as it is ours. Its ability to make up answers is impractical when we're after accurate ones, but this same flaw gives AI more flexibility in its idea generation and 'art-making.' How AI was pictured in the media, as cold machines excellent at repetitive tasks and foolproof answers, is a hallucination in itself. 

I know that these are two separate chapters, but when writing a reflection about them, I feel like they are inexorably linked (perhaps through my human bias of feeling that creativity is a predominantly human trait). Because AI is an uncanny mix of a human and an alien, it's able to problem-solve in ways we are not primed to (it also helps to have access to most, if not virtually all, digitized human knowledge). This leads to questions like whether what AI makes is art, how we can ethically use AI in our own creative endeavors, and what will become of us if we turn to AI first for our problems instead of attempting to work them out ourselves.

Updates in AI - A Robotic Renaissance

"By driving down the cost of stone fabrication with robots and AI, we’ll unleash the creative possibilities of artists and architects everywhere—and sustain a new generation of sculptors and craftsmen," is what Monumental Labs said about its process of using AI in the carving of Renaissance statues. Understandably, people were a little horrified at the sight of a machine carving into a marble slab and at this being called art. While I can understand the appeal of making art more accessible, I also find that when cost-cutting meets art, the quality and substance of the art are lacking, even if the technique is there. We talked about this concept briefly in class — how what makes art is often the context it's in and the human emotion and thought that went into it. If AI, something we don't consider to have emotions or consciousness, is used as a major helping hand in the process of art-making, what does that say about the art?

We cannot answer those questions fully as they are subjective and also depend on the context in which they are asked, but we as a whole should be wary of substituting human labor when it arguably should be left intact in projects like making Renaissance statues. 

Daily 522png.png

Blog 11

IDBridge Story Board-01.png

Co-intelligence: "AI AS A COWORKER"

Coming off of chapters from Co-intelligence where AI was framed as both a creative and a human, the next step would be to see how humans can collaborate with AI on their shared ground, and which tasks are better left to just the human or just the AI. Mollick argues that, while AI can help bring in new ideas or automate boring tasks that no human wants to do, humans should not replace their brainstorming process entirely with AI prompting, nor become over-reliant on AI by making it a crutch for their productivity instead of a tool. Equating the use of AI to carpenters using power tools, the author argues that AI will maximize people's productivity rather than replace people in the workplace. However, he also refers to a study where users were given no AI, mediocre AI, or highly functioning AI to fulfill a given task. The results show that while people with access to AI outperformed the people with no AI at all, the people who did the best had the mediocre AI. These people treated AI not like some beacon of expertise, but as a humble device that could make mistakes. If we treat AI like that, then working with it in the day-to-day workplace will be much better than not having it at all or using it mindlessly for everything. 

 

Design of Everyday Things: "KNOWLEDGE IN THE HEAD" 

In this chapter, Norman differentiates between declarative memory, facts that can be consciously recalled, and procedural memory, skills that are performed subconsciously. From here, he mentions how we offload knowledge in the head to knowledge in the world (i.e., picking up on outside cues to make decisions) to reduce our cognitive load. This is the balance designers have to think about in their work: How much information is too much? Is what I'm providing going to be annoying? Am I giving enough signal for the user to understand what to do? Good design should use signifiers to tell the user what to do, so that they're not spending time trying to understand how something works because of unclear instructions, only to use it incorrectly anyway. Norman suggests that designers can use mapping (e.g., putting dials in the same order as the stovetop burners they control) to make the design as intuitive as possible for the user. Because human memory is limited, human-centered design helps a person navigate different UIs and devices without overburdening their conscious mind.  

  

Updates in AI - Ballie 

Google Cloud and Samsung recently partnered up to create Ballie: a Gemini AI agent in the form of a round home companion. I chose to focus on this AI update because it hits on what we've been studying in this class by bringing the speculative future to the present. Bringing an AI-powered device into the home as a companion is something we've seen in fiction for decades, but it seems like the window between imagined fiction and our reality is blurring as this ball-shaped entity uses sensory data to interact with users in a friendly manner. I feel like I've seen a lot of human-shaped robots being pushed into homes, even if it's for social media attention from the wealthy elite, but I haven't seen many non-human-shaped robots being used outside of Roombas or other technology that's intended to be mostly tool-based. 

Inquiry 3 - Building an Identification System

 For this third inquiry, my partner and I were assigned a transcript that encased a persona's problem we had to solve. Our persona was Luis Garcia, a Venezuelan man who recently relocated to Germany. His main problem was that he didn't have all of his documentation to receive a new ID in Germany. On top of that, he suffered from language barriers as well. My partner and I, after much debate and back-and-forth, decided on IDBridge, a new ID system that combined a card and a phone screen to make a temporary ID while Luis collected his information. For daily use, he could use his physical IDBridge like any documentation, to travel or for employment. The speculative part of this design, set to be used in the year 2060, was that the card could also open via biometric scanning into a holographic platform where Luis could update his information as he received it after the initial onboarding process, where he gave IDBridge some starting information. This device used tool-based AI to help generate the new digital profile from segments of information and a simulation agent, "Bridge," to help Luis navigate the system and answer any immigration concerns he had. 

While I think this idea had good foundations, I must admit that I'm disappointed in the finish of the project itself. A lot of the design work was left to me, and I felt stretched thin between doing all of the branding, interfacing, and storyboarding during particularly bad weeks of the semester. On a more positive note, it was interesting to use Photoshop and Illustrator to alter the images AI gave me, using prompts I had also generated with AI. I'm not sure if how I was prompting was the issue or if AI just struggles to conjure technology that doesn't exist yet, but I had to go back and forth between editing the AI images and plugging them back into AI systems. It was an interesting experiment, but I'm not convinced the results were entirely successful.  

Daily 355.png

Blog 12

Co-intelligence: "AI AS A TUTOR" and "AI AS A COACH"

As this class pertains to using AI to help automate and advance our design inquiries, it seems right to end the class readings on AI in education. Just as we try to map out how AI will influence the workforce, we also must analyze how AI will affect classrooms. In the reading, the author compares the development of AI to the calculator. Educators at first were hesitant to implement this new tool in their coursework because they feared it would become too much of a crutch for students, who would struggle to do lower-level math themselves after becoming over-reliant on a calculator. However, using calculators in classrooms has allowed students to focus on more complex math problems, with the calculator speeding through equations they already know how to do. AI can be seen in a similar light. Some students, who have already been cheating on their coursework with the internet, are now using AI to do their work for them, which is to say they are using AI as a crutch to get through their education. This is reasonably what teachers are wary about with AI and is why, at least in my experience, they have a no-AI policy in their classrooms. However, if teachers can use AI to help students do the impossible, like code for a project without having any coding knowledge, then AI will be a helpful tool that increases what students are capable of. This will take a lot of reconfiguring of how the curriculum is shaped, though. 

It's proven that learning in one-on-one environments is what makes students perform best in school. Yet doing that, even if it's the ideal way of learning, is just not feasible at scale. This is potentially where AI can come in as a tutor, providing students with assistance at any hour of the day. There are, no doubt, problems that will arise with students using AI. For instance, AI hallucinates answers, and bias from its training data can be harmful to a student's education. This means there will still be a need for human teachers, to guide students in how to use AI and to reinforce whatever they learn from interacting with it. 

If a tutor helps supplement where one is lacking in their skillset, a coach helps build skills one is passionate about. A lot of the same issues and potential benefits apply when AI is used as a coach instead of a tutor, but I feel like using AI as a coach allows for more play. I wouldn't want to learn about US History from AI, just as I wouldn't want to learn about that topic from Google or biased teachers in South Carolina, but if I were learning from AI how to create motion effects in my paintings that I could then go implement on my own, I'd be embodying what it means to be co-intelligent. 

Updates in AI - OpenAI getting social 

OpenAI is working on making a social network. Sam Altman, CEO of OpenAI, has been competing with Elon Musk's X and its Grok, as well as Meta's AI models, so OpenAI having its own social app, whether it's a separate app or part of ChatGPT's preexisting platform, would help it not only compete with X and Meta but also create a lot more data for it to train its own models with. What this social platform would look like is still unclear, but there's an early prototype in the works that incorporates ChatGPT's image generation into a social feed. The ethical concerns of this — having AI make a social network — are open for debate. It seems to move the "dead internet theory" closer to reality, especially if all the current social media platforms use AI not just to produce their algorithms but as a direct simulation agent that interacts with their users like a friend. Social media in itself has enough problems with its addictive qualities, data use, and the way it decreases the feeling of human connection, even though communication has never been as easy as it is now, so adding AI into the mix seems fitting enough to belong in an episode of Black Mirror.


© 2023 by Abby Short
