Old Sol is very intense here in the San Luis Valley because of the altitude and the zero humidity. You can be walking along on a cold, calm day, legitimately cold, 15 F/-9 C, and, if you are facing the sun or the sun is on your back, you can find yourself overheated in your down jacket. Yesterday wasn’t that day. It was warmer than 15, but no sun, and during my saunter with Bear, I was cold. It took a while to warm up, too, since the point of the walk was to let Bear have a chance to smell things and toddle along at her own speed. It snowed off and on all day, but it didn’t amount to much.
Today the sun is shining, and the freezing fog has touched the trees. The air shimmers with ice crystals. It’s 7 F/-13 C. BUT a few years ago on this day it was -17 F/-27 C, so the warm spell of La Niña lingers.
I keep thinking that it might be time for an adventure in the world, but I don’t know where or what. I am held down by the roommates.
A woman I used to work with at the College of Business at SDSU yesterday offered me the job of editing an academic paper she’s writing. She did a Fulbright a year or two ago. She’s the only person from that life with whom I have any contact. She’s an awesome, dynamic, warm, human, funny person and I liked her a lot. I took the idea out with Bear to think it over, and when I came back I had my answer. As Bear investigated a complicated concatenation of tracks, I saw that I’m not the person to edit an academic paper, particularly in a field I don’t know. Her field is accounting. That’s almost like asking me to describe life in New York City. And editing? It’s an enormous word — it means so many different things to people depending on what they think it is. I don’t even know what it is half the time. To some people it means proofreading, to some it means making something sound better, to some it means critique, to some it means making sure the guy rowing the galleon isn’t wearing an Apple Watch.
I’d take on the project if I were still in San Diego, and we could talk about the project over coffee. The whole thing would work better if we could show each other things in the manuscript. Still, I feel really good that she thought of me.
Yesterday, after I got the email asking me to do a survey about using ChatGPT in a communication class, I gave ChatGPT a challenge. The bot couldn’t handle it, and THAT turned out to be very interesting. In business communication there are basically two kinds of messages, always defined by the audience: the news is good or bad for the audience, not for the writer/business.
One is a “bad news message,” a message in which (in a general sense) a business has to say “No” to a customer. It’s bad news for the customer because he/she doesn’t get what he/she wants. The challenge is to keep the customer’s goodwill (and avoid a lawsuit?) while saying “No.” The structure of that message is complicated for students to understand even though it’s very simple. It’s like breaking up with someone — you go to a nice restaurant, compliment them on something, then say “it’s not you, it’s me.” That’s it, more or less. The big rule is that you don’t say “No!” in the beginning. I asked ChatGPT to give me instructions for writing a bad news message and then demonstrate.
Basically, a correct answer would be 1) goodwill, 2) policy, 3) refusal, 4) [optional] offer of some kind of compensation (discount on a future order), 5) more goodwill — thanks for contacting us, etc. Simply, thank them for their message, appreciate their concern, explain company policy, tell them not to hesitate to get in touch if they have further questions. The “no” might not even be stated explicitly.
The bot gave incorrect instructions for writing a bad news message, and then wrote an example message using the correct structure. I asked it why it did that. It couldn’t handle the question or see what it had done. If I were teaching business communication now, I would use that in class.
I kind of pushed the bot, and it explained its limitations to me. I knew them already, but I wondered how it would “defend” itself. It didn’t. It couldn’t see what it couldn’t see and admitted it. It doesn’t have the analytical skills or what we might term “self-awareness” needed to see the contradiction between its instructions for writing a bad news message and the message it actually wrote.
To me this says that the bot can get the right answer but not know why or how it got it. To me this means that, as far as education goes, right answers in and of themselves need to be de-emphasized. The process and the reasons behind it (in the case of a bad news message, acknowledging the humanity of the person who will be disappointed) might be more worthy of an exam question. Could a bot learn to give the right answer to THAT question? Yeah. I can see the bot pushing educators in a very different direction, and I, personally, hope that happens. It makes me think of my best-ever business communication class, in which we met for four hours three days a week in a wonderful room (I called it “the bridge” after Star Trek) and everyone did their work right then and there on laptops, working together and working with me. Everyone learned so much. It’s the only bus comm class I ever took out for pizza at the end of the semester. The bot could push education toward more interactive learning and a different way to grade.
The email came from a group of university instructors who are writing a paper. I’m looking forward to hearing the results of the survey and reading the paper. I know that education is only ONE place where AI will have — and is having — an impact. Since I’ve been playing around with this, I’ve seen how much it is already involved in my life. Yesterday I filed my taxes. I was helped by what I can only call a “tax bot.” Considering how absolutely punctilious and literal Mr. Taxbot is by its “nature,” I was pretty happy with it. So much better than the old days when I had to fill out my tax form myself. One year when I had had a hard time financially and the feds still wanted me to pay, I wrote, “You can’t get blood from a turnip. Send me a bill.” I wrote that in red ink. There are times when being human is a liability. Taxbot just asks me to fill in blanks, then goes through everything with its utter lack of imagination to see if I’ve done it. It’s programmed to have a “friendly tone,” which is kind of annoying, but it’s better than a hostile tone. And, one good thing about Taxbot: it doesn’t lecture.
Another Weather Report and Stuff from the San Luis Valley

‘It couldn’t see what it couldn’t see and admitted it.’–I’m impressed. So many people will not/cannot do that.
Did you read in the news this morning that ChatGPT passed the Wharton MBA exam with a B/B- grade? Scary.
I don’t find that scary — kind of revelatory. The bot’s designed to regurgitate existing material. I hope this means exams will start being better designed, which would be good.
And it’s true. Most of my students wouldn’t cop to not understanding what they didn’t understand. The Chatbot said, “I’m learning all the time to improve my algorithms.”
Scary because the professors are worried about cheating.
Ahhh… That’s a thing professors have always had to deal with. I (personally) don’t think it’s the professor’s problem but that was never a popular point of view. 😀
I saw a headline this a.m. that a prof at Wharton gave ChatGPT his MBA final and it passed
My son once asked me to edit a music theory paper. I told him I knew nothing about Schenkerian Analysis and asked if I were the best choice to edit it. He said he wanted to know that his argument made sense, that the imagery worked, that the language made sense. He said I didn’t need to know Schenker’s work to do that – I needed to know writing and logical argument. It was like when I helped him with calculus. I didn’t know the math but I knew problem-solving. What do you need to know? What do you know? What is relevant in the stuff that you know? Throw out the irrelevant and do the calculations to get there from here.
I forgot to mention – the hoarfrost is beautiful!
It really is!
It’s not just about the editing; there’s the interpersonal interaction. I’m just not that person any more.
A very interesting experiment! I guess an AI that doesn’t take itself too seriously is a good thing!!
AI has no ego. That makes it very refreshing.