
Topic: Evaluating cryptography and solving the problem of AI safety simultaneously. (Read 186 times)

member
Activity: 691
Merit: 51
So what are your current experiences and your goals or concrete ideas? Do you have any specific project in mind or are you generally interested in discussing AI and its use in the Bitcoin blockchain field?

I'm not sure if I might have overlooked any questions. What exactly was your question?
Please go back, reread, and try to understand my posts. Only after that can we have a conversation.

-Joseph Van Name Ph.D.
hero member
Activity: 630
Merit: 731
Bitcoin g33k
So what are your current experiences and your goals or concrete ideas? Do you have any specific project in mind or are you generally interested in discussing AI and its use in the Bitcoin blockchain field?

I'm not sure if I might have overlooked any questions. What exactly was your question?
member
Activity: 691
Merit: 51
Your insights into the challenges of AI interpretability and the need for advancements in cryptographic algorithms are thought-provoking, and your dedication to developing more interpretable AI models like LSRDRs is commendable. Integrating mathematical techniques into the AI analysis of cryptosystems seems promising. No one knows which direction we're steering, but it's crucial to foster collaboration between the AI and cryptography communities to tackle these complex issues effectively.

Just my 2 sats

I appreciate the feedback. Sometimes, to develop better cryptocurrency technologies, we need to invent new AI systems to evaluate their cryptographic security. And an AI algorithm that always returns the same object after training, while still performing well, is an algorithm we can trust. Now, let me generalize LSRDRs to algorithms that behave more like neural networks.
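To illustrate why always returning the same trained object builds trust, here is a minimal sketch (ordinary convex least squares in NumPy, not an LSRDR, so take it only as an analogy): gradient descent reaches the same solution from any random initialization, so two independent training runs produce the same object and can be checked against each other.

```python
import numpy as np

# Synthetic regression data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, -2.0, 0.5, 3.0, 0.0]) + 0.1 * rng.normal(size=100)

def train(seed, steps=5000, lr=0.01):
    # Gradient descent on mean squared error from a random starting point.
    w = np.random.default_rng(seed).normal(size=5)
    for _ in range(steps):
        w -= lr * (2.0 / len(y)) * X.T @ (X @ w - y)
    return w

w1, w2 = train(seed=1), train(seed=2)
print(np.allclose(w1, w2, atol=1e-6))  # True: both runs return the same object
```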

-Joseph Van Name Ph.D.
hero member
Activity: 630
Merit: 731
Bitcoin g33k
Your insights into the challenges of AI interpretability and the need for advancements in cryptographic algorithms are thought-provoking, and your dedication to developing more interpretable AI models like LSRDRs is commendable. Integrating mathematical techniques into the AI analysis of cryptosystems seems promising. No one knows which direction we're steering, but it's crucial to foster collaboration between the AI and cryptography communities to tackle these complex issues effectively.

Just my 2 sats
member
Activity: 691
Merit: 51
Part of the secret to building a safe AI-based neural network is to have a law or foundation that the AI system can base all its functions and activities on. The foundation has to be good, free from evil, and should have networks of specialized databases/sources with different roles for solving different problems and for making the AI function properly... the databases must not derail from the law/foundation. I would expect an AI creator who really cares about safe AI to program the AI to depend more on the sources with the most accurate, consistent, and safe solutions to specific issues. Once the right source is found, the AI constantly learns from it and also remembers to give credit.
AI systems that are intentionally designed to cause harm are one problem. But an AI system may still be dangerous even if it was not intentionally designed to cause harm. We currently do not have a good understanding of the inner workings of AI systems, so it is a good idea to first understand those inner workings so that we can design the systems well. Part of the design for these AI systems must include interpretability, so that people can observe not only the AI's training data and loss/fitness function but can also observe and make sense of its inner workings. If an AI has bad processes deep inside, then we need to be able to know about these processes and correct them, either through retraining or ablation, and we need to be able to detect these bad inner processes before they result in bad outputs.
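Ablation in particular is easy to demonstrate on a toy model. The sketch below (plain NumPy, hypothetical random weights, nothing from a real system) knocks out one hidden unit at a time and measures how far the output moves; units that shift the output the most are the first candidates for inspection.

```python
import numpy as np

# A toy two-layer ReLU network with random (hypothetical) weights.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)   # hidden layer
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)   # output layer

def forward(x, ablate=None):
    h = np.maximum(W1 @ x + b1, 0.0)   # ReLU hidden activations
    if ablate is not None:
        h[ablate] = 0.0                # knock out one hidden unit
    return W2 @ h + b2

x = rng.normal(size=4)
baseline = forward(x)
for unit in range(8):
    shift = np.linalg.norm(forward(x, ablate=unit) - baseline)
    print(f"unit {unit}: output shift {shift:.3f}")
```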

I have developed the notion of an LSRDR, and some generalizations of this notion, to solve cryptographic problems, but LSRDRs may also be used to solve problems related to AI safety and interpretability. Since nobody else is working on this, one should not be surprised that LSRDRs cannot yet match the performance of neural networks, but I am working on this, and LSRDRs can solve some problems that neural networks have trouble with. For example, can you design a neural network to find a largest clique in a graph? Perhaps it is possible, but LSRDRs can probably do this more efficiently (though it seems that simulated annealing may outperform LSRDRs on the clique problem), and unlike neural networks, LSRDRs can solve the clique problem on some graphs just by looking at the single graph, without generating many training graphs. I hope that generalizations of LSRDRs can soon be used to solve more machine learning tasks so that they can better compete with neural networks, but at the very least, we can probably use something like LSRDRs to interpret neural networks. Neural networks as they stand today are quite bad in terms of interpretability, so we can either use more interpretable mathematical systems or improve our interpretability tools.
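For reference, the simulated annealing baseline mentioned above is simple to state. Here is a rough sketch (my own toy implementation, with an arbitrary cooling schedule and penalty term): the state is a vertex set, scored by its size minus a penalty for each missing internal edge, and one vertex is toggled per step.

```python
import math, random

def is_clique(S, adj):
    return all(v in adj[u] for u in S for v in S if u < v)

def score(S, adj):
    # Size of the set minus a penalty for every missing internal edge.
    missing = sum(1 for u in S for v in S if u < v and v not in adj[u])
    return len(S) - 2 * missing

def anneal_clique(adj, steps=20000, t0=2.0, seed=0):
    rng = random.Random(seed)
    nodes, S, best = list(adj), set(), set()
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-3          # linear cooling schedule
        T = S ^ {rng.choice(nodes)}              # toggle one vertex in or out
        delta = score(T, adj) - score(S, adj)
        if delta >= 0 or rng.random() < math.exp(delta / t):
            S = T
            if len(S) > len(best) and is_clique(S, adj):
                best = set(S)
    return best

# Toy graph: a triangle {0, 1, 2} plus a pendant vertex 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(anneal_clique(adj))  # {0, 1, 2} with high probability
```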

Happy New Year,

-Joseph Van Name Ph.D.
Ucy
sr. member
Activity: 2674
Merit: 403
Compare rates on different exchanges & swap.
Part of the secret to building a safe AI-based neural network is to have a law or foundation that the AI system can base all its functions and activities on. The foundation has to be good, free from evil, and should have networks of specialized databases/sources with different roles for solving different problems and for making the AI function properly... the databases must not derail from the law/foundation. I would expect an AI creator who really cares about safe AI to program the AI to depend more on the sources with the most accurate, consistent, and safe solutions to specific issues. Once the right source is found, the AI constantly learns from it and also remembers to give credit.
member
Activity: 691
Merit: 51
We have a couple of problems that need to be sorted out.

If you look at artificial intelligence, you will notice that even the experts have a very poor understanding of the inner workings of AI systems. In other words, our current AI systems are uninterpretable. Since our AI systems are uninterpretable, it will be difficult to control these AI systems, predict their behavior, or make sure that they do not cause an unexpected disaster. One way of solving the interpretability problem is to use interpretability tools to investigate the inner workings of AI systems. Another way is to design AI systems so that they are more interpretable in the first place. I believe that neural networks are inherently difficult to interpret, so to design more interpretable AI, we will probably need to use AI systems that are not neural networks. I do not believe that neural networks will completely go away, but they need competition. And while I have my qualms about neural networks, I do believe that we should still train AI systems using a variant of gradient descent/ascent.

There is another problem that someone should take a look at. We need to create, evaluate, and standardize new block ciphers, hash functions, and CSPRNGs, and we need to continue to analyze the existing cryptographic algorithms. Our block ciphers, hash functions, and CSPRNGs were not designed to run on energy-efficient, physically reversible hardware (by reversibility, I mean partial reversibility), and since energy-efficient reversible computation is the future, we should design these cryptographic algorithms for reversibility. Our current cryptanalysis techniques do not incorporate very much machine learning, so by using machine learning, we can probably substantially improve our cryptanalytic techniques. While NIST's processes for standardizing AES and SHA-256 were somewhat decentralized in the sense that they accepted input from the entire cryptographic community, as far as I am aware, these standardization processes did not include algorithms that automatically accept cryptographic functions and return results about their cryptographic security. We will also need to evaluate cryptographic functions that can be analyzed by these machine learning systems for applications that have not been developed yet. For example, what if we want a block cipher with a 128-bit key size and a 16-bit message size for some reason? What if we want a block cipher where the key space and message space are vector spaces over the finite field with 113 elements and where the cipher is composed of polynomial functions? We will need to be able to evaluate that too.
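To make the 128-bit-key, 16-bit-message example concrete, here is a toy sketch of such a cipher (an arbitrary 8-round Feistel construction of my own with a SHA-256-based round function, purely for illustration; it is not a secure or standard design, and a 16-bit block is trivially attackable by codebook lookup):

```python
import hashlib

def round_fn(half, round_key):
    # Round function: first byte of SHA-256 over the half-block and key.
    return hashlib.sha256(bytes([half]) + round_key).digest()[0]

def encrypt(block16, key128, rounds=8):
    assert len(key128) == 16                # 128-bit key
    L, R = block16 >> 8, block16 & 0xFF     # two 8-bit halves
    for i in range(rounds):
        rk = bytes([i]) + key128            # round index tweaks the key
        L, R = R, L ^ round_fn(R, rk)
    return (L << 8) | R

def decrypt(block16, key128, rounds=8):
    L, R = block16 >> 8, block16 & 0xFF
    for i in reversed(range(rounds)):
        rk = bytes([i]) + key128
        L, R = R ^ round_fn(L, rk), L       # Feistel rounds undo in reverse
    return (L << 8) | R

key = bytes(range(16))
c = encrypt(0x1234, key)
print(hex(c), hex(decrypt(c, key)))         # ciphertext, then 0x1234 back
```

A machine learning system evaluating a cipher of this shape could, for instance, be handed the encryption function as a black box and asked to distinguish it from a random permutation of 16-bit blocks.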

We can make progress on the problem of AI interpretability while analyzing block ciphers at the same time. Block ciphers are more mathematical than your typical machine learning data sets such as feet and toes. And if we are analyzing mathematical data sets such as those produced by block ciphers and other cryptographic algorithms, then since the data being analyzed is more mathematical, one should be able to interpret this data better using mathematical techniques. Not only does cryptography provide more interpretable data for machine learning models, but there is greater motivation for interpreting the AI models that analyze cryptosystems than there is for interpreting other forms of AI. These machine learning models do not necessarily need to consume large amounts of data or analyze overly complicated systems to be helpful. For example, AES has S-boxes that are 1 byte long, and machine learning models can easily analyze permutations of bytes. Since AES is also quite mathematical, it will not be any problem at all for an AI system to analyze AES and for humans to interpret the results.
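As a concrete example of this kind of analysis, the sketch below computes the differential uniformity of an 8-bit S-box, i.e. the largest entry of its difference distribution table. The AES S-box attains the optimal value 4, while a random byte permutation typically lands around 10 or 12; statistics like this are exactly the kind of mathematically interpretable feature a machine learning model can work with.

```python
import random

def differential_uniformity(sbox):
    # Largest count of x with sbox[x ^ a] ^ sbox[x] == b, over all
    # nonzero input differences a and all output differences b.
    n = len(sbox)
    worst = 0
    for a in range(1, n):
        counts = [0] * n
        for x in range(n):
            counts[sbox[x ^ a] ^ sbox[x]] += 1
        worst = max(worst, max(counts))
    return worst

sbox = list(range(256))
random.Random(0).shuffle(sbox)   # a random byte permutation
print(differential_uniformity(sbox))
```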

I have personally developed my own AI models for analyzing block ciphers, namely LSRDRs (L_{2,d}-spectral radius dimensionality reductions) and their generalizations. These AI models seem quite interpretable compared to neural networks, for several reasons. First of all, they are mathematical in the sense that, while they are trained using gradient ascent, the attained local maximum is often unique: if you train the LSRDR twice, you will reach the exact same local maximum value (I have only a little mathematical theory for why this should be the case, so consider this an experimental result). LSRDRs also satisfy other interesting mathematical properties that make them quite interpretable. For example, an LSRDR of a collection of quaternionic matrices (encoded as complex matrices) is often itself a collection of quaternionic matrices, even though there is no reason for the LSRDR to respect the quaternionic structure so closely (since quaternionic matrices have even complex dimension, this only works when the dimensions of the reduced matrices are even). I have even obtained a complete interpretation of some LSRDRs of collections of matrices in which each matrix has precisely one non-zero entry. You cannot get such a complete interpretation of a neural network, because a trained neural network has a lot of noise in the final model.
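For the curious, here is how the objective can be sketched in code. I am hedging on the details: the fitness below is the L_2-spectral radius similarity as I understand it from public descriptions of LSRDRs, and crude random-search hill climbing stands in for the actual gradient ascent, so treat both as simplifying assumptions rather than the real training procedure.

```python
import numpy as np

def rho(M):
    # Spectral radius: the largest absolute value of an eigenvalue.
    return max(abs(np.linalg.eigvals(M)))

def similarity(As, Xs):
    # L_2-spectral radius similarity between (A_1,...,A_r) and
    # (X_1,...,X_r); the exact normalization here is an assumption.
    num = rho(sum(np.kron(A, X.conj()) for A, X in zip(As, Xs)))
    den = np.sqrt(rho(sum(np.kron(A, A.conj()) for A in As)) *
                  rho(sum(np.kron(X, X.conj()) for X in Xs)))
    return num / den

def fit_lsrdr(As, d, steps=3000, seed=0):
    # Random-search hill climbing in place of gradient ascent.
    rng = np.random.default_rng(seed)
    Xs = [rng.normal(size=(d, d)) for _ in As]
    best = similarity(As, Xs)
    for step in range(steps):
        scale = 0.1 * (1.0 - step / steps) + 1e-3   # shrinking step size
        trial = [X + scale * rng.normal(size=X.shape) for X in Xs]
        s = similarity(As, trial)
        if s > best:
            Xs, best = trial, s
    return Xs, best

As = [np.random.default_rng(i).normal(size=(4, 4)) for i in range(3)]
_, s1 = fit_lsrdr(As, d=2, seed=1)
_, s2 = fit_lsrdr(As, d=2, seed=2)
print(s1, s2)   # with true gradient ascent these reportedly coincide
```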

Now, developing cryptographic functions for reversible computation may accelerate the development of reversible hardware. Those primarily interested in AI safety may want to delay as long as possible the energy-efficient computational hardware that would enable more advanced AI systems, so they will not want to use AI to advance reversible computation. But those primarily interested in cryptographic progress should consider the development of more interpretable and safer AI a pleasant side effect of their cryptography research.

-Joseph Van Name Ph.D.

P.S. Hmm. You all seem to hate me for developing a cryptocurrency that you all hate, but I am not doing this research for Bitcoin, am I? I am instead doing this research for a cryptocurrency that you do not value because you are anti-intellectual and do not value research at all. I hope you change your ways.