Week 7 - Sunday, 22nd January

What does Intel Arc mean for the GPU market?

Written by Muhammad Shah

Intel has recently entered the GPU market with its Arc series of graphics processing units (GPUs). These are designed to deliver high performance and energy efficiency across a variety of applications - from gaming and video editing to machine learning and scientific research - at a digestible price point.

The Arc GPUs are built on Intel's Xe architecture, which is designed to scale from low-power mobile devices to high-performance data centres. The Xe architecture also allows a common software stack across all of these devices, making it easy for developers to create applications that run on a wide range of hardware. The Arc cards target the mid-range market for cost-effective PC builds, at a time when Nvidia and AMD are charging extortionate amounts for their own mid-range GPUs - the RTX 4070 Ti, for instance, launched at an eye-watering £800. Whilst the Arc line-up, at around £380, may not offer comparable raw performance, it sits a tier above in price-to-performance, with claims of up to RTX 3080 levels of performance.

Unfortunately, as relatively new arrivals, these GPUs have been plagued with issues so far. By far the biggest is game compatibility: Intel's GPUs natively support only DirectX 11 and 12 (with 11 having already faced severe performance issues), so many older titles that rely on DirectX 9 suffer a crippling performance hit. Similar issues have been seen in popular games such as Minecraft, with users reporting problems loading textures that leave the game looking terrible.

Putting the present aside and looking to the future, should Intel manage to iron out the main issues with the Arc line-up through upcoming driver updates, its market penetration could provide a welcome respite from the ever-increasing costs of the fight between the two existing GPU giants, giving consumers a viable, low-cost alternative.

Programming News

Written by Joel Swedensky

ChatGPT is an AI chatbot released by OpenAI in late November last year, based on GPT-3, their model for natural language processing. It has sparked huge interest because its key focus is generating extremely human-like, natural-sounding responses - and it does this very well. (Try it out yourself and see.)

Already, people have been using the site for many things - mostly fun. Realistically, practical applications should use the more general GPT-3 model to get better results, so the chatbot mainly serves as entertainment. Since it is so good at generating natural-sounding language, people have given it hilarious prompts, such as a debate between Donald and Daffy Duck in the style of Shakespeare. The AI even understands humour to an extent, as when asked to write a three-line poem, and it has also been used to generate clickbaity titles and invent new colours...

Crucially, the chatbot does not have access to the internet. While this is generally a good idea, as it lets the bot run in a controlled environment (and avoids it taking over the world), it means it has no way to verify information; ask it about anything after 2021 (when its training data ends) and it simply does not know, stating:

"my knowledge was cut off in 2021, and I am not able to browse the web or access new information. Can I help you with anything else?"
If you ask it for a fact, however, it is often extremely overconfident in its response, even when it is wrong - such as claiming a banana is bigger than a cat, even though a cat has bigger dimensions. Really. This is potentially dangerous: it is in some ways better at lying than humans, and can be used to generate trustworthy-sounding text intended to mislead.

So serious is this problem that Stack Overflow - the (in)famous programming Q&A site - banned the AI, explaining:
"Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking and looking for correct answers."
People, potentially well-meaning, had been answering questions extremely fast - by asking ChatGPT and copy-pasting the response without checking its truth - leading to misinformation spreading through the site.

While the ban is great in theory, AI-generated text is not easy to detect - or was not, until GPTZero arrived at the beginning of the year. The tool, which has already received over 7 million views, can detect ChatGPT's writing style using the same family of technology that ChatGPT is built upon. It measures the "perplexity" of the text - how unpredictable it is to a language model. Human writing tends to score much higher, and the tool reports statistical information on the perplexity of the sentences in the text before giving its verdict. This could be very useful for stopping AI plagiarism in education.

"Plagiarism" by AI has been a controversial subject recently, particularly around AI art generators such as DALL-E 2 (also by OpenAI). While the art often looks fantastic, several questions are under debate: whether the prompter or the AI made it, for a start, and also the ethics of the training data used for the AI - should all the artists whose work the model was trained on be credited?
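The perplexity measure that GPTZero relies on boils down to a simple formula: the exponential of the average negative log-probability a language model assigns to each token. The sketch below illustrates the maths with a toy unigram model built from word frequencies - this is an illustration of the formula only, not GPTZero's actual model:

```python
import math
from collections import Counter

def perplexity(tokens, probs):
    # Perplexity = exp of the average negative log-probability the
    # model assigns to each token. Lower = more predictable text.
    nll = -sum(math.log(probs[t]) for t in tokens) / len(tokens)
    return math.exp(nll)

# Toy unigram "model": word probabilities estimated from a tiny corpus.
corpus = "the cat sat on the mat the cat ran".split()
counts = Counter(corpus)
total = sum(counts.values())
model = {w: c / total for w, c in counts.items()}

predictable = "the cat sat".split()   # frequent words -> low perplexity
surprising = "ran mat sat".split()    # rare words -> high perplexity

print(perplexity(predictable, model))
print(perplexity(surprising, model))
```

A real detector would use a large neural language model (GPTZero reportedly builds on GPT-2-family models) rather than word counts, but the comparison works the same way: text the model finds predictable scores low, while quirkier, more human text scores high.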

ChatGPT is a showcase of great advancements in natural language AI, which has come extremely far since Alan Turing devised the Imitation Game (aka the Turing Test), through Eugene Goostman passing it, and much further since. However, it raises many ethical issues that do need to be addressed.

Thanks for reading! 😊

Especially at the beginning of this adventure, we thank you deeply for reading and supporting us in our quest to deliver the best tech news!

Please give us a follow at https://twitter.com/pyxlenews

Also, do share it with as many people as possible by sending them the link to the website: https://news.pyxle.ml

Thanks again,

The Pyxle News Team 📧