
Academic Writing and AI-Based Writing Assistants

Writing is an essential element of academic research — as researchers, we have to write papers, proposals, (long) reports, code, and, of course, lots of emails. As writing takes up a lot of our time, being able to write well, and quickly, is a very important skill for any researcher.

The ability of large language models (e.g. in the form of ChatGPT) to produce well-written language is rather fascinating, so it is no surprise that many have considered using them (or have actually used them) for academic writing as well; see for example [1-3] for an overview and a discussion of some benefits and challenges. In general, large language models have many possible applications; in the context of academic writing, the goal of such models is to assist or replace the writing process, so, in this post, I will refer to them as AI-based writing assistants (AIWAs).

As a researcher in the field of learning-based robots, I might be expected to be very positive about AIWAs and to fully embrace them in my work. That is not really the case. In this post, I would like to discuss why I am rather skeptical of AIWAs, or why I value the process of writing rather than simply the end result (the written text). I will not discuss issues related to data privacy; those fall beyond the scope of what I want to cover here. I will also not touch on the issue of hallucinations by LLMs, as there are already many good discussions on that topic.

Contents

  1. The Positive Aspects of AI-Based Writing Assistants
  2. Points of Skepticism
    1. Writing as a Thinking Support
    2. Text Ownership
    3. The Endless Pursuit of Increased Productivity
  3. Valid Uses of AI-Based Writing Assistants
  4. Concluding Remarks

The Positive Aspects of AI-Based Writing Assistants

Before I discuss my generally skeptical view of AIWAs, I would like to point out what I think are very positive aspects of using them for academic writing:

Points of Skepticism

My skepticism of using AIWAs for academic writing is based on three arguments: writing is an essential support for the thinking process, AI-generated text raises questions of ownership, and the pursuit of ever-increasing productivity can work against careful research.

I expand on each of these points below.

Writing as a Thinking Support

At least for me, writing is an essential element that supports my thinking process, particularly in two stages of my work:

  1. While generally thinking about a topic: In the early stages of working on a problem, there is a need to consolidate one’s thoughts in order to make general progress; for instance, the concrete direction that the work should take may not be fully clear yet and needs to be defined. This is usually the case after reading quite a bit of related literature on the topic and before embarking on the actual work. A long time ago, I started using free writing as a method for sorting out my thoughts during this process, and I have found it to be incredibly helpful because it enables exploring different ideas in a mostly informal way. I believe the main reason why free writing works is that one takes the time to engage in thinking about the work during the process. I simply cannot see how AIWAs can replace this in any satisfactory way; yes, one can prompt the assistant and engage in a conversation with it, but (a) much of the work during the process will be done by the AIWA rather than the researcher, and (b) the process is likely to be much faster, so the long-term cognitive impact is unlikely to be the same. I would like to make a parallel with fast food here: fast food is, well, food, and provides us with various necessary nutrients, but it is rarely of the same quality as a slowly cooked meal and can have negative long-term effects if overused.
  2. When describing my work: After the work is done, there is obviously a need to write about it, typically in the form of a paper. At this point, there is a need to (a) decide how to present the work so that it has a clear logical flow and conveys the intended message, but also (b) engage with the results and think critically about what they reveal. In my opinion, neither of these aspects can be done by an AIWA; they need to be done by the researchers themselves, as they are the ones who did the work and have all the insights necessary to provide a proper explanation. The former aspect is where the researchers’ objective for the work should become clear; an AIWA obviously cannot know why the researchers conducted the work. The latter aspect is where the researchers’ engagement with the results should shine; as they have done the work, they know best which caveats should be considered when interpreting the results. At least for me, this type of writing is a bit of a creative process: I have a general idea about what to write and how, but the text evolves as I add content and discover better ways of presenting the material. Through this, I form a sort of connection with my written text; I don’t think that would be possible if I had the text generated by an AIWA.

An astute reader may read the discussion of the second point and ask: “Well, when we write collaboratively, it is anyway not the case that we write all the text ourselves; we have to work with text written by our co-authors. How is working with text generated by an AIWA different from working with text written by a co-author?” That is indeed a valid question; there are, however, a couple of differences when I write together with a co-author:

  1. I know that the co-author is involved in the work at least enough to write meaningful content, and that they potentially know as much about it as I do, or even more.
  2. Even more importantly, if I have questions about the content, I can get back to them, and we can have a long discussion about why something is or is not in the text; with AIWAs, this is not necessarily the case (often, the provided reasons are meaningless or too general to be of any use).

Text Ownership

Unclear text ownership is a related reason why I am skeptical about AIWAs in academic writing. Here, I am not talking about plagiarism (which is a real concern, as evidenced by the New York Times lawsuit against OpenAI, and which I think is the biggest potential ethical issue with AI-generated text), but about a more subtle issue, namely that of creating a connection with the written text as a writer.

As I have already mentioned, at least for me, the process of writing is where I can fully demonstrate my engagement with the work, but it is also what allows me to really claim ownership of the written text. Writing can be a relatively slow process, but it is this slowness that makes it possible to really internalise the contents and the logical flow of the text. In fact, I would say that, even years after writing some of my papers, I am still able to recall the exact logical arguments that I was trying to make there, precisely because I took the time to prepare everything carefully. In this respect, writing can be compared to proving a mathematical theorem: it can be helpful to work with a ready-made proof, but true understanding and ownership of a proof only happen once we have taken the time to construct the argument ourselves. Following this analogy, using an AIWA to produce text for us is a bit like having someone else write parts of the proof; the end result may be complete and correct, but the fact remains that we haven’t fully created the proof.

The above co-author argument can be made here as well: “When we write papers with co-authors, we anyway don’t have full ownership of the text, so how is that different from working with AI-generated text?” Another valid question, but the difference is that, in this case, we don’t claim sole ownership of the text written by the co-authors; as they are co-authors, we have shared ownership of the text. But does that imply that an AIWA should become a co-author? Would that solve the problem? Not quite. Ignoring cases in which pets have been added as co-authors, a co-author can take accountability for the written text; an AIWA cannot do that.

In this context, I suppose the quantity of generated text matters. If only a short text is generated, it can be relatively simple to modify it so that one “makes it” their own; the problem becomes much more serious when large quantities of text are generated by an AIWA.

The Endless Pursuit of Increased Productivity

In modern society, productivity is perhaps one of the most coveted characteristics of an individual: the more productive we are, the more work we can do. In principle, there is nothing wrong with that; this is also one of the reasons why there are countless productivity techniques and guides out there. As AIWAs can take away some of the burden of the writing process, they are obviously a tool that can increase productivity. For instance, this might mean that we are able to write more papers in a shorter time, or that we can prepare project proposals more quickly than we would otherwise. Sounds perfect, right?

In my opinion, not necessarily. I think this can go somewhat against the spirit of scientific research, or at least against my understanding thereof. Research is not necessarily about finding answers fast (although there are, of course, fields such as medicine where this can be very desirable), but about finding the right answers after a thorough examination of different possibilities and outcomes. In addition, research should be about stepping back and considering the consequences of the work we are doing. All of this can benefit from less productivity at certain times; quite often, it can be very helpful to take some time to simply “sleep on things”, namely to take some distance from the work and then get back to it with a fresh mind. AIWAs can potentially disrupt this process, as they can encourage settling on a solution quickly; taking a step back may be seen as unnecessary when using an AIWA as a productivity tool, as one can quickly finish a writing task and immediately move on to another one. But, due to the reduced time investment, this can mean that important considerations were not made in the process; the temptation to move on to the next thing can be overwhelming.

I do not consider myself an exceptional researcher by any means. I simply believe I am good at what I do, and I have been known to be quite productive when it comes to writing; however, as alluded to in the previous sections, my writing productivity has, perhaps counterintuitively, typically come through periods of lower productivity, where I have taken some time to think about how to achieve the goals of my work, or have gone through an iterative process of writing (text or code) and actively examining the consequences of what I have written. While there are undoubtedly cases where I would have benefited from a productivity boost, in most cases, a slower pace has been exactly what I needed.

Concluding Remarks

In this post, I have shared my critical view of the adoption of AI-based writing assistants (AIWAs) for academic writing, namely why I currently do not consider them to be a viable option for my writing activities. Even though there are undoubtedly some positive aspects of using AIWAs, particularly when viewing them as language assistants, I see their use in academic writing as generally undesirable. To support this view, I focused my discussion on three main aspects: writing as a tool that supports the thinking process, the challenges of using AIWAs with respect to text ownership, and the need to sometimes be less productive (in the short term) for increased long-term productivity. Despite all this, it has to be mentioned that AIWAs have undoubtedly already had an effect on academic writing; indeed, many of the effects will likely become visible only in the long term. I might be in the minority with my views, but I suppose that remains to be seen.

References

  1. Y. K. Dwivedi et al., “Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy,” International Journal of Information Management, vol. 71, Art. no. 102642, pp. 1-63, Aug. 2023.
  2. B. Almarie et al., “Editorial - The Use of Large Language Models in Science: Opportunities and Challenges,” Principles and Practice of Clinical Research, vol. 9, no. 1, pp. 1-4, Jul. 2023.
  3. B. Mittelstadt, S. Wachter, and C. Russell, “To protect science, we must use LLMs as zero-shot translators,” Nature Human Behaviour, vol. 7, pp. 1830-1832, 2023.