Week 6 - Sunday, 15th January

MagSafe to become the new Qi2 wireless charging standard.

Written by Muhammad Shah

Back in October 2020, Apple announced the iPhone's version of MagSafe, a new wireless charging technology which aimed to reduce the hassle involved with traditional wireless charging. Prior to MagSafe, users had to carefully align their phones with the wireless charging mat in order to charge, which took time and often meant that misaligned phones would simply not charge. In addition, the inefficiency of the design and the non-optimal charging position limited wireless charging speeds to around 7.5W and contributed to problems such as overheating whilst charging.

MagSafe fixed most of these issues: users can now simply slap the charger onto the back of their phone, with an array of magnets snapping it into alignment. This also dramatically improved the efficiency of the charging process, which means that MagSafe users can now charge at up to 15W.

Now, in a quiet press release, Apple have effectively "open-sourced" MagSafe and agreed to allow it to be used in the next-generation Qi2 (pronounced "chee two") wireless charging standard. This means that even Android phones supporting the new standard can use MagSafe chargers with the same benefits that iPhone users enjoy. In addition, we can expect to see an influx of new MagSafe-compatible accessories for both Android phones and iPhones.

But why has Apple suddenly decided to lead the industry on this? In my opinion, it is related to the new EU law requiring all devices to charge via a common USB-C port or some form of wireless charging. Apple may be pushing this new standard as a way to get ahead of the game, potentially in preparation for a port-less iPhone that would sidestep the EU rules. Whilst it is unlikely that this will be ready for the 2023 iPhone 15, we could very well see a port-less iPhone 16 or 17.

Programming News

Written by Joel Swedensky

ChatGPT is an AI chatbot released by OpenAI in late November last year, based on their GPT-3 model for natural language processing. It has sparked huge interest because its key focus is on generating extremely human-like, natural-sounding responses - and it does this very well. (Try it out yourself and see.)

Already, people have been using the site for many things - mostly fun. Realistically, practical applications should use the more general GPT-3 model directly to get better results, so the chatbot itself mostly serves as entertainment. Since it is so good at generating natural-sounding language, people have given it hilarious prompts, such as writing a debate between Donald Duck and Daffy Duck in the style of Shakespeare. The AI even understands humour, to an extent, such as when asked to write a three-line poem. It has also been used to generate clickbaity titles and invent new colours...
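If you did want to build something practical on top of the model rather than the chat site, OpenAI expose GPT-3 through a paid API. The snippet below is only a rough Python sketch, not an official example: it assumes you have the openai package installed and an API key set in your environment, and it uses text-davinci-003, the most capable GPT-3 model available at the time of writing.

# Rough sketch: calling the GPT-3 model directly via OpenAI's API instead of
# the ChatGPT website. Assumes the `openai` package is installed and an API
# key is available in the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # most capable GPT-3 model at the time of writing
    prompt="Write a three-line poem about wireless charging.",
    max_tokens=60,
    temperature=0.7,           # higher values give more creative answers
)

print(response["choices"][0]["text"].strip())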

Crucially, the chatbot does not have access to the internet. While this is generally a good idea, as it keeps the bot running in a controlled environment (and stops it taking over the world), it means it has no way to verify information; if you ask it about anything that happened after 2021 (when its training data ends), it simply does not know, stating:

my knowledge was cut off in 2021, and I am not able to browse the web or access new information. Can I help you with anything else?

Generally, however, if you ask it for a fact, it is often extremely overconfident in its response, even when it is wrong - like claiming a banana is bigger than a cat, even though a cat is larger in every dimension. Really. This is potentially dangerous: the model is in some ways better at lying than humans, and can be used to generate trustworthy-sounding text designed to mislead.

So severe is this problem that Stack Overflow - the (in)famous programming Q&A site - banned the AI, stating that:
Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking and looking for correct answers.
People, some of them well-meaning, had been rushing to answer questions extremely fast - by asking ChatGPT and copy-pasting the response without checking whether it was true - leading to misinformation spreading through the site.

While the ban is great in theory, AI-generated answers are not that easy to detect - or weren't, until GPTZero arrived at the beginning of the year. It has already received over 7 million views and can detect ChatGPT's writing style using the same kind of technology that ChatGPT is built upon. It measures the "perplexity" of a text - roughly, how unpredictable the text is to a language model. Human writing tends to score much higher on perplexity, and the tool gives you statistical information on the perplexity of the sentences in the text before giving its verdict. This could be very useful for stopping AI plagiarism in education. "Plagiarism" by AI has been a controversial subject recently, particularly around AI art generators such as DALL-E 2 (also by OpenAI). While the art often looks fantastic, several questions are under debate: whether the prompter or the AI made it, for a start, and the ethics of the training data - should all the artists whose images the model was trained on be credited?
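Coming back to GPTZero: its exact implementation is not public, but the "perplexity" idea can be sketched with the openly available GPT-2 model from Hugging Face's transformers library (chosen here purely for illustration). The model scores how surprised it is by each word, and the exponential of the average surprise is the perplexity.

# Rough sketch of the "perplexity" idea behind GPTZero, using the open GPT-2
# model purely for illustration; GPTZero's actual model and thresholds are not public.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Tokenise the text and ask the model how surprised it is by each token.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the average
        # cross-entropy (negative log-likelihood) over the sequence.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    # Perplexity is the exponential of that average; lower values mean the
    # text was more predictable to the model - and therefore more "AI-like".
    return torch.exp(loss).item()

print(perplexity("The cat sat on the mat."))                            # predictable, low perplexity
print(perplexity("Moonlit spanners tango with jealous spreadsheets."))  # unusual, higher perplexity

A detector like GPTZero would then compare scores like these against thresholds learned from known human and AI writing before giving its verdict.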

ChatGPT is a showcase of the great advancements in natural language AI, which has come a long way from Alan Turing proposing the Imitation Game (a.k.a. the Turing Test), to the chatbot Eugene Goostman arguably passing it in 2014, and much further since. However, it also raises many ethical issues that do need to be addressed.

Thanks for reading! 😊

Especially at the beginning of this adventure, we thank you deeply for reading and supporting us in our quest to deliver the best tech news!

Please give us a follow at https://twitter.com/pyxlenews

Also, do share it with as many people as possible by sending them the link to the website: https://news.pyxle.ml

Thanks again,

The Pyxle News Team 📧