The AI & Quantum Computing Chronicle

Description
This channel covers Artificial Intelligence, Data Science, Machine Learning & Quantum Computing, helping you draw valuable information from our posts.

For any suggestion/question:
Twitter: @ItalyHighTech/@KevinClarity
Telegram: @vzocca/@kcorella
Advertising
We recommend visiting

Official Telegram Channel by Sarkari Result SarkariResult.Com
Welcome to this official Channel of Sarkari Result SarkariResult.Com - On this page you will get all the updated information on Sarkari Result website from time to time.

Last updated 11 hours ago

👌 Only Current Affairs, English & Hindi Medium.
Contact @GKGSAdminBot
Channel Link- https://t.me/+wytqxfcVInNjN2E1

By Chandan Kr Sah
Email- ChandanKrSahIN@gmail.com

Must Subscribe Us On YouTube - https://youtube.com/channel/UCuxj11YwYKYRJSgtfYJbKiw

Last updated 1 year, 3 months ago

📌 YouTube channel link:
https://youtube.com/c/RojgarwithAnkit

🥇 Telegram channel - @rojgaarwithankit

🥈 Telegram channel - @RojgarwithankitRailway

📌 RWA helpline number - 9818489147

Last updated 1 year, 2 months ago

2 weeks, 6 days ago

What Is Spacetime?
General relativity predicts that matter falling into a black hole becomes compressed without limit as it approaches the center—a mathematical cul-de-sac called a singularity. Theorists cannot extrapolate the trajectory of an object beyond the singularity; its time line ends there. Even to speak of “there” is problematic because the very spacetime that would define the location of the singularity ceases to exist. Researchers hope that quantum theory could focus a microscope on that point and track what becomes of the material that falls in. https://www.nature.com/articles/d41586-018-05095-z

Nature

What Is Spacetime?

Nature - Physicists believe that at the tiniest scales, space emerges from quanta. What might these building blocks look like?

3 weeks, 6 days ago

In this article, the authors propose an adaptation of two cutting-edge concepts in deep learning, namely the Transformer architecture and the recent Kolmogorov-Arnold network, through its temporal version. The results obtained show a real improvement over standard methods and demonstrate that TKAT is something to consider even within the Transformer architecture. With their specific learning task, different from that used for the Temporal Fusion Transformer, they show that the architecture needs to be modified task by task to achieve better performance. Their approach clearly outperforms the standard method - https://arxiv.org/pdf/2406.02486v2
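A minimal sketch of the KAN idea referenced above: instead of fixed activations on nodes, every edge carries its own learnable univariate function. The Gaussian-bump parameterization below is an illustrative assumption (the KAN papers use B-splines), and none of this code comes from the TKAT paper itself:

```python
import numpy as np

def kan_layer(x, coeffs, centers, width=0.5):
    """KAN-style layer: each edge (i, j) applies its own learnable
    univariate function phi_ij to input x[i], parameterized here as a
    sum of Gaussian bumps (an illustrative choice, not splines).

    x:       (n_in,) input vector
    coeffs:  (n_in, n_out, n_basis) per-edge basis coefficients
    centers: (n_basis,) shared basis centers
    """
    # basis[i, b] = exp(-((x[i] - centers[b]) / width)^2)
    basis = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
    phi = np.einsum("ijb,ib->ij", coeffs, basis)   # (n_in, n_out) edge outputs
    return phi.sum(axis=0)                         # node j sums its incoming edges

rng = np.random.default_rng(0)
x = rng.normal(size=3)
centers = np.linspace(-2, 2, 5)
coeffs = rng.normal(size=(3, 2, 5))
y = kan_layer(x, coeffs, centers)
print(y.shape)  # (2,)
```

In a full KAN, the `coeffs` would be trained by gradient descent; TKAT then embeds such layers inside a Transformer-style forecasting architecture.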

3 weeks, 6 days ago

Despite the inherent complexity and challenges that neuroscientists must deal with while addressing neuronal classification, numerous reasons exist for interest in this topic. Some brain diseases affect specific cell types. Neuron morphology studies may lead to the identification of genes to target for specific cell morphologies and the functions linked to them. A neuron undergoes different stages of development before acquiring its ultimate structure and function, which must be understood to identify new markers, marker combinations, or mediators of developmental choices. Understanding neuron morphology represents the basis of the modeling effort and the data-driven modeling approach for studying the impact of a cell’s morphology on its electrical behavior and function as well as on the network dynamics that the cell belongs to. https://www.nature.com/articles/s41598-023-38558-z
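The paper applies quantum kernel algorithms to this classification problem. As a rough classical stand-in, the sketch below builds an RBF kernel matrix over toy "morphology" feature vectors and classifies with a simple kernel nearest-class-mean rule; a quantum kernel would replace `rbf_kernel` with an estimated state overlap |<phi(a)|phi(b)>|^2 between feature-encoded quantum states. The data and the decision rule are both illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # k(a, b) = exp(-gamma * ||a - b||^2); the quantum version estimates
    # the fidelity between two feature-encoded quantum states instead.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_class_predict(K_test_train, y_train):
    # Score each test point by its mean kernel similarity to each class
    # (a simple kernel nearest-class-mean rule, not the paper's QSVM).
    classes = np.unique(y_train)
    scores = np.stack([K_test_train[:, y_train == c].mean(1) for c in classes], 1)
    return classes[scores.argmax(1)]

rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 0.3, size=(20, 4))   # toy feature vectors, class 0
X1 = rng.normal(1.5, 0.3, size=(20, 4))   # toy feature vectors, class 1
X_train = np.vstack([X0[:15], X1[:15]])
y_train = np.array([0] * 15 + [1] * 15)
X_test = np.vstack([X0[15:], X1[15:]])
y_test = np.array([0] * 5 + [1] * 5)

pred = kernel_class_predict(rbf_kernel(X_test, X_train), y_train)
acc = (pred == y_test).mean()
print(acc)
```

The point of quantum kernels is that some state-overlap kernels are conjectured to be hard to compute classically, while plugging into exactly this kind of kernel-method machinery.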

Nature

Application of quantum machine learning using quantum kernel algorithms on multiclass neuron M-type classification

Scientific Reports - Application of quantum machine learning using quantum kernel algorithms on multiclass neuron M-type classification

2 months, 2 weeks ago

Typically, when engineers build machine learning models out of neural networks composed of units of computation called artificial neurons, they tend to stop the training at a certain point, called the overfitting regime. This is when the network basically begins memorizing its training data and often won’t generalize to new, unseen information. But when the OpenAI team accidentally trained a small network way beyond this point, it seemed to develop an understanding of the problem that went beyond simply memorizing it: it could suddenly ace any test data.

The researchers named the phenomenon “grokking,” a term coined by science-fiction author Robert A. Heinlein to mean understanding something “so thoroughly that the observer becomes a part of the process being observed.” The overtrained neural network, designed to perform certain mathematical operations, had learned the general structure of the numbers and internalized the result. It had grokked and become the solution. https://www.quantamagazine.org/how-do-machines-grok-data-20240412/
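For context, the grokking experiments used small algorithmic datasets such as modular addition. The snippet below only reconstructs that kind of train/test split (an illustrative setup, not OpenAI's code); the grokking effect itself appears only after training a small network far past the point of memorization:

```python
import numpy as np

# Task: learn (a + b) mod p from a fraction of all pairs, then evaluate
# on the held-out pairs -- the setting where grokking was reported.
p = 97
pairs = np.array([(a, b) for a in range(p) for b in range(p)])
labels = (pairs[:, 0] + pairs[:, 1]) % p

rng = np.random.default_rng(0)
idx = rng.permutation(len(pairs))
n_train = int(0.5 * len(pairs))            # train on half of all (a, b) pairs
train_idx, test_idx = idx[:n_train], idx[n_train:]

print(len(pairs), len(train_idx), len(test_idx))
```

A memorizing network scores perfectly on `train_idx` and near-randomly on `test_idx`; grokking is the delayed jump of test accuracy long after training accuracy has saturated.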

Quanta Magazine

How Do Machines ‘Grok’ Data?

By apparently overtraining them, researchers have seen neural networks discover novel solutions to problems.

2 months, 3 weeks ago

Chaotic dynamics have been observed in neurons and neural networks, both in experimental data and in numerical simulations. Theoretical studies have proposed an underlying role of chaos in neural systems. Nevertheless, whether chaotic neural oscillators make a significant contribution to network behaviour, and whether the dynamical richness of neural networks is sensitive to the dynamics of isolated neurons, still remain open questions. We investigated synchronization transitions in heterogeneous neural networks of neurons connected by electrical coupling in a small-world topology. The nodes in our model are oscillatory neurons that, when isolated, can exhibit either chaotic or non-chaotic behaviour, depending on conductance parameters. https://www.nature.com/articles/s41598-018-26730-9
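As a loose illustration of this setup (not the paper's conductance-based neuron model), the sketch below builds a Watts-Strogatz small-world network and integrates Kuramoto phase oscillators with heterogeneous natural frequencies, measuring synchrony with the order parameter r:

```python
import numpy as np

def small_world_adjacency(n, k, p_rewire, rng):
    # Watts-Strogatz construction: ring lattice with k neighbours per side,
    # each edge rewired to a random node with probability p_rewire.
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(1, k + 1):
            t = (i + j) % n
            if rng.random() < p_rewire:
                t = int(rng.integers(n))
                while t == i or A[i, t]:
                    t = int(rng.integers(n))
            A[i, t] = A[t, i] = 1
    return A

def order_parameter(theta):
    # Kuramoto order parameter r in [0, 1]; r = 1 means full phase synchrony.
    return float(np.abs(np.exp(1j * theta).mean()))

rng = np.random.default_rng(2)
n = 50
A = small_world_adjacency(n, k=2, p_rewire=0.1, rng=rng)
deg = np.maximum(A.sum(1), 1)              # guard against isolated nodes
omega = rng.normal(1.0, 0.05, n)           # heterogeneous natural frequencies
theta = rng.uniform(0, 2 * np.pi, n)
K, dt = 1.0, 0.01
for _ in range(5000):                      # forward-Euler integration
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(1) / deg
    theta = theta + dt * (omega + K * coupling)
r = order_parameter(theta)
print(r)
```

Sweeping the coupling strength K while swapping chaotic versus non-chaotic node dynamics is, schematically, the kind of synchronization-transition experiment the paper performs with biophysical neurons.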

Nature

Synchronization transition in neuronal networks composed of chaotic or non-chaotic oscillators

Scientific Reports - Synchronization transition in neuronal networks composed of chaotic or non-chaotic oscillators

2 months, 3 weeks ago

The differences between human and machine learning—when it comes to language (as well as other domains)—are stark. While LLMs are introduced to and trained with trillions of words of text, human language “training” happens at a much slower rate. To illustrate, a human infant or child hears—from parents, teachers, siblings, friends and their surroundings—an average of roughly 20,000 words a day (e.g., Gilkerson et al., 2017; Hart and Risley, 2003). So, in its first five years a child might be exposed to—or “trained” with—some 36.5 million words. By comparison, LLMs are trained with trillions of tokens within a short time interval of weeks or months. The inputs differ radically in terms of quantity (sheer amount), but also in terms of their quality.
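The 36.5 million figure follows directly from the cited daily rate:

```python
# Quick check of the estimate: ~20,000 words/day over the first five years.
words_per_day = 20_000
total = words_per_day * 365 * 5
print(total)  # 36500000, i.e. roughly 36.5 million words
```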

But can an LLM—or any prediction-oriented cognitive AI—truly generate some form of new knowledge? We do not believe they can. One way to think about this is that an LLM could be said to have “Wiki-level knowledge” on varied topics in the sense that these forms of AI can summarize, represent, and mirror the words (and associated ideas) it has encountered in myriad different and new ways. On any given topic (if sufficiently represented in the training data), an LLM can generate indefinite numbers of coherent, fluent, and well-written Wikipedia articles. But just as a subject-matter expert is unlikely to learn anything new about their specialty from a Wikipedia article within their domain, so an LLM is highly unlikely to somehow bootstrap knowledge beyond the combinatorial possibilities of the data and word associations it has encountered in the past.

AI is anchored on data-driven prediction. We argue that AI’s data and prediction-orientation is an incomplete view of human cognition. While we grant that there are some parallels between AI and human cognition—as a (broad) form of information processing—we focus on key differences. We specifically emphasize the forward-looking nature of human cognition and how theory-based causal logic enables humans to intervene in the world, to engage in directed experimentation, and to problem solve. -
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4737265

SSRN

Theory Is All You Need: AI, Human Cognition, and Decision Making

Artificial intelligence (AI) now matches or outperforms human intelligence in an astonishing array of games, tests, and other cognitive tasks that involve high-

2 months, 3 weeks ago

Diffusion Models From Scratch. Here, we'll cover the derivations from scratch to provide a rigorous understanding of the core ideas behind diffusion. What assumptions are we making? What properties arise as a result? - https://www.tonyduan.com/diffusion/index.html
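One of the first results that series derives, the closed-form forward process q(x_t | x_0), can be sampled directly. The sketch below assumes the standard DDPM linear beta schedule (an assumption on my part; the linked notes derive the general case):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,  eps ~ N(0, I),
    with alpha_bar_t = prod_{s <= t} (1 - beta_s)."""
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)     # linear schedule (standard DDPM choice)
x0 = rng.normal(size=(8,))

x_early = forward_diffuse(x0, t=10, betas=betas, rng=rng)     # barely noised
x_late = forward_diffuse(x0, t=T - 1, betas=betas, rng=rng)   # close to pure noise
print(np.cumprod(1.0 - betas)[T - 1])  # alpha_bar at t = T-1 is nearly 0
```

The property the derivation highlights: as alpha_bar_t goes to 0, x_t forgets x_0 entirely, which is what lets generation start from pure Gaussian noise.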

2 months, 3 weeks ago

Unraveling the Wonders of Hebb's Rule: Decoding Neural Connections

Hebb's rule stands as a fundamental principle in neuroscience, illuminating the mechanisms through which our brains learn and adapt. It provides insight into the connections in our brain and into how we learn and remember.

Often summarised as "cells that fire together wire together", Hebb's rule underscores the significance of synchronized neural activity in strengthening connections between neurons. It serves as a blueprint for understanding how experiences shape our neural pathways.

How do humans (and animals) recognise objects even when only partially visible? Why do we suddenly remember past events or people when we find ourselves in the locations where these events occurred?

How do we recognise from a simple sketch the image of a chair or a house?

How can a small child, even before being able to walk, be able to recognise a chair regardless of its size or colour and not get confused? -
https://open.substack.com/pub/vzocca/p/learning-by-association-the-hebbs
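The rule itself is one line of maths: the weight between two units grows when they are active together, Δw_ij = η·x_i·x_j. A minimal Hopfield-style sketch (my illustration, not code from the post) shows how Hebbian weights let a partial cue recall the full stored pattern, the associative recall described above:

```python
import numpy as np

def hebb_train(patterns, eta=0.1):
    # Hebbian outer-product learning: Delta w_ij = eta * x_i * x_j,
    # i.e. "cells that fire together wire together".
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for x in patterns:
        W += eta * np.outer(x, x)
    np.fill_diagonal(W, 0.0)     # no self-connections
    return W

def recall(W, probe):
    # One synchronous update: a partial cue is completed by association.
    return np.sign(W @ probe)

pattern = np.array([1, 1, -1, -1, 1, -1, 1, -1])   # a stored +/-1 memory
W = hebb_train(pattern[None, :])
cue = pattern.copy()
cue[:2] = 0                                        # hide part of the pattern
print((recall(W, cue) == pattern).all())           # True: the memory is completed
```

This is the same mechanism, scaled up, behind recognising a chair from a sketch or a partially occluded object: the visible fragment activates units whose Hebbian connections reinstate the rest.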

The Intelligent Blog

Learning by association: the Hebb's rule.

Learning by associations Children learn quickly by making associations. Then they remember through those associations. A child, even a small child who may not yet be able to walk, can recognise a chair whether it is large or small, or regardless of its…

3 months, 1 week ago

In todayโ€™s stock market, staying informed about news and events is crucial for making strategic decisions. Recognizing the impact of sentiment on market trends is essential to adjust strategies accordingly - https://www.insightbig.com/post/stock-market-sentiment-prediction-with-openai-and-python

InsightBig


InsightBig is a blog covering all new latest updates and innovations in the field of Finance and Technology

4 months, 3 weeks ago

The possibility follows from the quantum phenomenon known as superposition, where particles maintain all possible realities simultaneously until the moment they’re measured. In labs in Austria, China, Australia and elsewhere, physicists observe indefinite causal order by putting a particle of light (called a photon) in a superposition of two states. They then subject one branch of the superposition to process A followed by process B, and subject the other branch to B followed by A. In this procedure, known as the quantum switch, A’s outcome influences what happens in B, and vice versa; the photon experiences both causal orders simultaneously.
https://www.quantamagazine.org/quantum-mischief-rewrites-the-laws-of-cause-and-effect-20210311/
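The quantum switch has a standard matrix form: a control qubit in state |0> routes the target through A then B, and in state |1> through B then A, so W = |0><0| ⊗ BA + |1><1| ⊗ AB. A small numpy sketch of this textbook construction (illustrative, not the labs' optical implementation):

```python
import numpy as np

def haar_random_unitary(n, rng):
    # QR-based Haar-random unitary (standard construction).
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

rng = np.random.default_rng(0)
A = haar_random_unitary(2, rng)
B = haar_random_unitary(2, rng)

# Quantum switch on (control qubit) x (target qubit):
# control |0> -> apply A then B (matrix B @ A); control |1> -> B then A.
P0 = np.array([[1, 0], [0, 0]])
P1 = np.array([[0, 0], [0, 1]])
W = np.kron(P0, B @ A) + np.kron(P1, A @ B)

plus = np.array([1, 1]) / np.sqrt(2)       # control in superposition
psi = np.array([1.0, 0.0])                 # target state |0>
out = W @ np.kron(plus, psi)

# Projecting the control onto |+> leaves the target proportional to
# (AB + BA)|psi>: the two causal orders interfere.
target_plus = np.kron(plus, np.eye(2)) @ out
print(np.allclose(2 * target_plus, (A @ B + B @ A) @ psi))  # True
```

Measuring the control in the +/- basis is exactly how the experiments detect indefinite causal order: the |+> and |-> outcomes pick out the anticommutator and commutator of A and B, which no fixed ordering of the two processes could produce.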

Quanta Magazine

Quantum Mischief Rewrites the Laws of Cause and Effect

Spurred on by quantum experiments that scramble the ordering of causes and their effects, some physicists are figuring out how to abandon causality altogether.
