I didn't break. You are trash.
-Joseph Van Name Ph.D.
Joseph, since this is your thread, can I ask you a question? I'd like to know what types of research you are doing these days. I'm assuming you stopped doing math research, but I'm not sure.
If I stopped doing research I would get really bored. And I really do not enjoy dealing with the s@#$ from people, because most people act like s@#$ as much as possible, because most people are s@#$.

Suppose that A_1,...,A_r are real m by m matrices and B_1,...,B_r are real n by n matrices. Define the L_2-spectral radius similarity between (A_1,...,A_r) and (B_1,...,B_r) as

rho(kron(A_1,B_1)+...+kron(A_r,B_r)) / (rho(kron(A_1,A_1)+...+kron(A_r,A_r)) * rho(kron(B_1,B_1)+...+kron(B_r,B_r)))^(1/2).

If A_1,...,A_r are n by n matrices, then we say that a collection of d by d matrices (X_1,...,X_r) is an L_2-spectral radius dimensionality reduction (LSRDR) of (A_1,...,A_r) if the L_2-spectral radius similarity between (A_1,...,A_r) and (X_1,...,X_r) is locally maximized. An LSRDR can be found using standard gradient ascent, so this should be thought of as a machine learning algorithm. I promise that LSRDRs satisfy magical properties.

I have been dealing with things like LSRDRs and other machine learning algorithms for AI safety since these are useful for cryptocurrency research. Imagine that. The one cryptocurrency that you don't value is the one where the developer is doing the most research. This is because the cryptocurrency sector values stupidity much more than it values intelligence. Most people value stupidity over intelligence. And that is why we all deserve more pandemics.
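The definitions above can be sketched in a few lines of numpy. This is a minimal illustration, not the author's implementation: the function names, the finite-difference gradients, the backtracking step-size rule, and all parameter choices are assumptions made for the sketch.

```python
import numpy as np

def spectral_radius(M):
    # Largest absolute value of an eigenvalue of M.
    return np.max(np.abs(np.linalg.eigvals(M)))

def l2_similarity(As, Bs):
    # L_2-spectral radius similarity between (A_1,...,A_r) and (B_1,...,B_r),
    # exactly as defined in the post, with kron = Kronecker product.
    num = spectral_radius(sum(np.kron(A, B) for A, B in zip(As, Bs)))
    dA = spectral_radius(sum(np.kron(A, A) for A in As))
    dB = spectral_radius(sum(np.kron(B, B) for B in Bs))
    return num / np.sqrt(dA * dB)

def lsrdr(As, d, steps=40, lr=0.5, eps=1e-6, seed=0):
    # Search for d-by-d matrices (X_1,...,X_r) locally maximizing the
    # similarity with (A_1,...,A_r), by gradient ascent.  Gradients are
    # estimated by finite differences; the step size is halved until the
    # move improves the similarity, so the trace is nondecreasing.
    rng = np.random.default_rng(seed)
    Xs = [rng.standard_normal((d, d)) for _ in As]
    history = [l2_similarity(As, Xs)]
    for _ in range(steps):
        base = history[-1]
        grads = []
        for k in range(len(Xs)):
            G = np.zeros((d, d))
            for i in range(d):
                for j in range(d):
                    trial = [X.copy() for X in Xs]
                    trial[k][i, j] += eps
                    G[i, j] = (l2_similarity(As, trial) - base) / eps
            grads.append(G)
        step = lr
        while step > 1e-9:  # backtrack until the move actually improves
            cand = [X + step * G for X, G in zip(Xs, grads)]
            s = l2_similarity(As, cand)
            if s > base:
                Xs, base = cand, s
                break
            step /= 2
        history.append(base)
    return Xs, history
```

Note that the similarity of a tuple with itself is 1 by construction, since the numerator and the denominator collapse to the same spectral radius.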
...
Cry harder. Try harder.
-alterra57 Ph.D.K.W.
You are exactly what I am talking about when I say that most people act like s@#$ most of the time.
-Joseph Van Name Ph.D.
P.S. If you see research done by a schmuck at a university, you can be confident that such research is really s@#$ty. Universities are extremely unprofessional. Universities refuse to apologize for promoting violence, so their research is s@#$ by default.
Look who's talking, the fake Ph.D., hypocrite.
Nothing that you say has any value whatsoever. Please learn virtue or die during the next 5 pandemics. But I know that you choose death. You are suicidal.
-Joseph Van Name Ph.D.
No offense to your inferior genetics but some of us are built like walking tanks.
Pride comes before death, b@#$%.
-Joseph Van Name Ph.D.
It's not pride, it's basic biology and it also makes me refuse empty eggs like yourself.
Please go away. The only thing you are doing is demonstrating how much of a worthless pile of s@#$ you are.
-Joseph Van Name Ph.D.
You're a scammer roleplaying someone you're not, a bad one at that too. Go ahead, guess who the worthless pile of shit is.
Go away. You are producing nothing of value here. The only thing you are doing is convincing me that it is not worth it at all to make any effort to prevent the next 5 pandemics.
obviously you're a very brilliant mind. i don't think i could ever understand that type of math no matter how hard i tried, but thanks for sharing. i think the world doesn't value people that do research like you. kind of sad, but i guess it's the truth.
What are your opinions about things like the halting problem in computer science and the stop button paradox? I'm assuming you accept the undecidability of the halting problem, but do you think there is a simple solution to the stop button paradox for AI?
I accept the undecidability of the halting problem since it is not controversial among mathematicians and other experts, and the proof is not that hard; the halting problem argument also gives a greatly simplified route to Gödel's incompleteness theorem. I am only familiar with the basics of the stop button problem, and I am not yet convinced that the stop button problem is what we should be focused on with AI safety.
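The diagonalization behind the undecidability proof fits in a few lines of Python. This is only an illustrative sketch: `make_diagonal` and the toy decider are hypothetical names invented here. Given any purported decider `halts(f)` for zero-argument functions, we build a function that does the opposite of whatever the decider predicts for it, so no decider can be correct on every input.

```python
def make_diagonal(halts):
    # Given any purported decider halts(f) -> bool for zero-argument
    # functions, build a function on which the decider must be wrong.
    def diagonal():
        if halts(diagonal):
            while True:   # decider claimed "halts", so loop forever
                pass
        # decider claimed "loops", so halt immediately
    return diagonal

# Example: a (wrong) candidate decider that claims every program loops.
never_halts = lambda f: False
d = make_diagonal(never_halts)
d()  # returns immediately, contradicting the decider's claim that it loops
```

The same construction defeats a decider that claims everything halts (the diagonal program would then loop forever), which is why no single `halts` can be right about all programs.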
1. The stop button issue is only a problem when we train an AI system to optimize a fitness/loss function of its outputs. But our current AI systems are trained not only to have the right outputs; they may also be regularized, so that their L_1 and/or L_2 norms are minimized as well. In this case, we are training the AI to minimize a quantity that is not a function of the outputs of the network. What if we keep on using more and more heavily regularized AI that performs well in practice but which does not clearly optimize particular outputs? In that case, the stop button issue may not even be important, or the solution may not work, since the AI's reaction to the stop button may not be directly optimized for.
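The point in (1) can be made concrete with a toy regularized objective (the function name and values here are illustrative assumptions, not any particular system's loss): the data-fit term depends on the model's outputs, while the L_2 penalty depends only on the weights themselves.

```python
import numpy as np

def regularized_loss(W, X, y, lam=0.1):
    preds = X @ W                      # the model's outputs
    fit = np.mean((preds - y) ** 2)    # depends only on outputs vs. targets
    penalty = lam * np.sum(W ** 2)     # L_2 term: depends only on the
                                       # weights, not on any output
    return fit + penalty
```

Two weight settings producing identical outputs can thus have different training losses, so an argument that reasons purely about what the optimized outputs incentivize does not directly apply.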
2. The stop button issue is a problem for centralized AI systems, but what about AI systems that are distributed, where each implementation of the AI is slightly different? (With fine-tuning, for example with LoRAs, we can easily retrain an existing AI for specific purposes.) In this case, a stop button may only stop some of the AI systems, and we would still need to deal with the rest.
3. Even if we have a solution to the stop button problem, we will still need to use AI interpretability to make sure that the solution is properly implemented in AI systems. This is why I consider AI interpretability to be the most fundamental aspect of AI safety.
-Joseph Van Name Ph.D.