What is Calibrated Quantum Mesh?
The current wave of innovation in AI has focused on structured data, i.e. data that can be put into tables. Techniques such as Deep Learning have shown promising results there. However, such techniques find it difficult to process natural language. Calibrated Quantum Mesh (CQM), on the other hand, is an AI technique developed specifically for language.
CQM is designed to emulate the cognitive steps humans take when solving a problem, such as understanding a text written in natural language. It does away with the need for training data annotated by subject-matter experts, which is the most expensive and most time-consuming step in training systems based on Deep Learning or similar algorithms. The CQM approach also reduces the need for unannotated data sets to a fraction.
We have trained CQM-based AI systems within 4-12 weeks and achieved up to 98% accuracy. This gives our clients a high degree of control over their workflows, very high accuracy, and a disruptive ROI. In most cases, CQM enables a fundamental change in the client's business process.
CQM works on three basic tenets:
- CQM recognizes that in nature a symbol, word, text, or any variable can have multiple meanings, some more likely than others. CQM-based systems consider all such possibilities (quantum states) when finding answers. For example, a phrase like “good day” could mean a temperature of 70°F in San Francisco, but 35°F in Antarctica.
- CQM recognizes that everything is correlated with everything else, and that each item bounds the possible meanings other items can have. CQM-based systems use this mesh of interconnections to reduce error. Take the phrase “the report”: on its own, the word “report” is more likely a verb, but combined with “the” it is more likely a noun. While most NLP solutions handle phrase-driven associations like this one, CQM identifies such associations at much higher levels as well.
- CQM adds all available information sequentially to converge the mesh to a single meaning. This calibration process quickly identifies lacunae and enables very fast training, leading to very high accuracy. CQM models use training data, contextual data, reference data, and other known facts about the problem to design these calibrations. Sometimes these calibrating systems, called Calibrating Data Layers, are independent modules of a CQM or another AI process.
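The three tenets above can be illustrated with a toy sketch. CQM itself is proprietary and its internals are not public, so everything here is a hypothetical illustration, not actual CQM code: each item holds multiple weighted meanings (quantum states), connected items constrain each other, and evidence is applied sequentially until a single meaning dominates.

```python
# Toy illustration of the three tenets (hypothetical, not actual CQM code):
# 1) an item holds multiple weighted meanings ("quantum states"),
# 2) interconnected items bound each other's possible meanings,
# 3) evidence is applied sequentially until the states converge.

def normalize(states):
    """Rescale weights so they sum to 1."""
    total = sum(states.values())
    return {meaning: w / total for meaning, w in states.items()}

def calibrate(states, evidence):
    """Apply one piece of evidence: multiply each state's weight by the
    evidence's compatibility score, then renormalize."""
    return normalize({m: w * evidence.get(m, 0.0) for m, w in states.items()})

# On its own, "report" is more likely a verb.
report = normalize({"verb": 0.7, "noun": 0.3})

# A constraint from the mesh: a preceding "the" strongly favours a noun.
preceded_by_the = {"verb": 0.05, "noun": 0.95}

# Sequential calibration converges the states toward a single meaning.
for evidence in [preceded_by_the]:
    report = calibrate(report, evidence)

best = max(report, key=report.get)
print(best)  # "noun": the mesh context overrides the standalone prior
```

The same update loop would simply run over more pieces of evidence (contextual data, reference data, known facts) in a real system, which is what the sequential "calibration" above gestures at.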
Once set up, passing training data through an untrained CQM further defines many of the mesh's interrelationships. Where applicable, some of the data layers learn from this data as well. Often, new relationships and nodes are added to the mesh. The more we use CQM, the smarter it becomes.
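How a mesh might grow from use can be sketched as follows. This is again a hypothetical illustration under simple assumptions (the `Mesh` class and its adjacency-counting rule are invented for this example): co-occurring items strengthen the edge between them, and unseen items become new nodes.

```python
# Hypothetical sketch of a mesh refining itself as data passes through:
# adjacent words reinforce their connecting edge, and previously unseen
# words are added as new nodes. (Illustrative only; not the CQM algorithm.)
from collections import defaultdict

class Mesh:
    def __init__(self):
        self.nodes = set()
        self.edges = defaultdict(float)  # (a, b) -> connection strength

    def observe(self, sentence):
        words = sentence.lower().split()
        self.nodes.update(words)            # new nodes added as encountered
        for a, b in zip(words, words[1:]):  # adjacent pairs reinforce an edge
            self.edges[(a, b)] += 1.0

mesh = Mesh()
for text in ["the report is ready", "file the report"]:
    mesh.observe(text)

print(mesh.edges[("the", "report")])  # 2.0: the link strengthens with use
```

Each pass through new data strengthens frequently seen relationships and extends the mesh, which is the sense in which such a system gets "smarter" with use.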
When we model a workflow using CQM, we avoid designing black boxes to the extent possible. The modular nature of CQM lets a sub-model act as a Calibrating Data Layer in another model. Similarly, meshes can be used as nodes, or sub-meshes, of other meshes. This modularization adds to CQM's ‘experience’ as we continue to solve more problems: each problem becomes an input to the next.
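The modularity described above can be sketched with a simple composition pattern, where a whole mesh exposes the same interface as a single node and can therefore be plugged into a larger mesh. All class and method names here are invented for illustration:

```python
# Hypothetical sketch of CQM's modularity: a solved mesh can act as a
# single node inside a larger mesh. (Illustrative only.)
class Node:
    def __init__(self, name):
        self.name = name

    def evaluate(self, text):
        # A leaf node scores whether its term appears in the text.
        return 1.0 if self.name in text else 0.0

class SubMesh(Node):
    """A whole mesh wrapped so it can be used as one node elsewhere."""
    def __init__(self, name, children):
        super().__init__(name)
        self.children = children

    def evaluate(self, text):
        # Aggregate the child nodes' scores (a simple average here).
        scores = [child.evaluate(text) for child in self.children]
        return sum(scores) / len(scores)

# A previously solved problem becomes an input to the next one.
tax_terms = SubMesh("tax", [Node("vat"), Node("hmrc")])
top_level = SubMesh("filing", [Node("deadline"), tax_terms])
print(top_level.evaluate("vat deadline"))  # 0.75: sub-mesh used as a node
```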
The algorithm's ability to handle fluid, multi-state, interconnected knowledge fits the needs of natural language. Ideas can be expressed as parts of the mesh with varying complexity, and keywords are not especially important when processing natural language using CQM. Similarly, the algorithm can learn from indirect corpora. For example, we ran it over HMRC.com, Law.com, Investopedia, and a proprietary glossary while helping a UK tax advisory.
In the future, we intend to use CQM for other basic cognitive processes as well, e.g. processing intonations in speech, or translating ideas back into words. Perhaps CQM could also be used to express unarticulated thoughts and process them.