
Topic: Keyless encryption and passwordless authentication - page 4. (Read 2887 times)

full member
Activity: 224
Merit: 120
To me, it doesn't make sense. Yet. I just don't understand how you can identify someone without knowing at least one detail about them. 2FA (time based) works on a secret and the current time, changing every 30 seconds.

Encryption works on a key, whether that's a shared secret key or a public/private keypair.

Yes, and the only problems with that are when they steal your 2FA private key at the time of creation, or when your device time isn't exactly in sync, or when the user loses the private key (because Google Authenticator was on the stolen phone, etc.)...

To me, 2FA is no excuse to replace a solid, well-randomized password made with a decent password manager (free open-source software, not online sites) that is itself protected by a very good master password and runs on a secure OS unlikely to have the random malware of the week sniffing around.

Passwordless solutions have always been defeated at some point; they are way too dangerous. You can do a one-time setup and then go asymmetric, as with SSH, where you exchange public keys once and never type a login password again, but only if your OS is secured.

And very likely some of the passwordless proposals involve fingerprinting you to the point of uniqueness. What happens when THAT info falls into the wrong hands? The same as with KYC/AML.
-------------------------
What you're describing is the real state of affairs. These concerns arise because, in my opinion, none of the modern passwordless technologies you describe actually change that state of affairs.
If you replace your password with your biometric data, then for the server all you did was change your numeric identifier. No more and no less.
That is not at all what the technology described here offers.
It is not a variable key, nor a session key that is somehow generated, distributed, used, and transformed into a new key.

These are unique rules for the formation of each data packet, completely independent of your wishes, your skill, the amount of encrypted information, your biometric data, your passwords, your keys, and any of your actions.

To the server, it looks as if the numeric identifier changes with literally every data packet.
The trick is that if the same symmetric system runs on the server, this change is equally deterministic for the server and for you, but not for an outside observer. Since this method uses no keys, there is nothing to steal except your entire device.

You might not notice the theft of a key (it is just software), but you will immediately notice that your smartphone or desktop computer has been stolen from your home.

Since there is no key or password, all control is based only on derivatives of events. Events combine an external time factor (external time is always linear, and these marks are taken not every 30 seconds, as with Google Authenticator, but on every single packet, without rules fixed by a programmer) with an internal event counter. Just as you cannot relive the last second, this system can never be in the same state it was in a second before. And the main role in this concept belongs to your information, which is not encoded or transmitted directly but which, indirectly, through a one-way function, influences the course of changes in the entire system.
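To make this concrete, the idea of two endpoints marching through identical states, driven by shared events instead of a stored key, can be imitated with a simple hash ratchet. This is only a rough outside sketch, not the actual system: SHA-256 stands in for the one-way function, and the shared initial state stands in for whatever first synchronization the real scheme performs.

```python
import hashlib

def ratchet(state: bytes, event: bytes) -> bytes:
    # One-way step: the next state depends on the previous state and
    # on the event (the packet) that both sides have just observed.
    return hashlib.sha256(state + event).digest()

# Both endpoints start from one shared state, established once.
alice = bob = hashlib.sha256(b"initial synchronisation").digest()

for packet in (b"packet-1", b"packet-2", b"packet-3"):
    alice = ratchet(alice, packet)
    bob = ratchet(bob, packet)

assert alice == bob  # identical evolution, with no key ever exchanged

# A single modified bit desynchronises the two sides for good
# ("packet-4" and "packet-5" differ in exactly one bit):
expected = ratchet(alice, b"packet-4")
diverged = ratchet(bob, b"packet-5")
assert expected != diverged
```

In this toy version the unpredictability for an eavesdropper rests entirely on the secrecy of the initial state; the thread's scheme claims to derive its state from internal geometric events instead.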

In this encryption concept, you could stretch a point and say that a kind of key is used for each packet of data (not each message); and, as I wrote earlier, the data packets are generated independently of your activity. This is a security feature of your closed link: it should always stay closed once you have established a P2P connection.

But look deep into the technology and you will not be able to call it a key; the term simply does not match the processes that are going on.

In this concept, your identifier is floating. It applies to only one data packet (not a message); it cannot be used for any other data packet.

What is there to steal, and why steal it?



A distinctive feature of the keyless encryption system, as mentioned above, is its mandatory detection of any modification.

A conventional encryption system guarantees nothing of the kind.

In any conventional keyed encryption system, if today you encrypt the word "Hello" with key "A" and get code "B", then tomorrow the same key "A" applied to "Hello" will again produce code "B".

That is not possible in a keyless system.
If you encrypt the word "Hello" this second, you get code "C". If you immediately encrypt the word "Hello" again, you will get some cipher, but not code "C". More than that: you could not reproduce "C" even if you wanted to.

That's the difference between keyless ciphers and key ciphers.
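The non-repetition property can be imitated in a few lines of stateful code. This is a sketch only, under the assumption that both sides share an evolving state and that SHA-256 plays the role of the internal transformation; it is not the author's algorithm.

```python
import hashlib

state = hashlib.sha256(b"shared starting state").digest()

def encrypt(message: bytes) -> bytes:
    """Encrypt one packet, then advance the internal state, so the
    same plaintext can never map to the same ciphertext twice.
    (Sketch only; messages up to 32 bytes.)"""
    global state
    pad = hashlib.sha256(state + b"pad").digest()[:len(message)]
    ciphertext = bytes(m ^ p for m, p in zip(message, pad))
    state = hashlib.sha256(state + message).digest()  # state moves on
    return ciphertext

c1 = encrypt(b"Hello")
c2 = encrypt(b"Hello")
assert c1 != c2  # "Hello" gives code "C" once, then never "C" again
```

A receiver holding the same starting state runs the mirror of this loop and stays in step; an observer sees two unrelated ciphertexts for the same word.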

How do the transmitting and receiving systems know the rules for encrypting and decrypting, in this case, the word "Hello"?

Note that no encryption happens by itself; at a minimum, it happens:
1) at a specific point in time;
2) at a specific position in the system's own running count of events.

Important note: the time factor alone is not enough. To be precise, physical time plays a crucial role only at the start of a communication session and in the first verification of your "partner". Do not imagine that the system simply counts seconds; that model is not viable and has little practical use.

The system does not care which word will be encrypted; what matters is that the system knows exactly which Logical Time Tunnel (LTT) is in effect, the one that has just been formed.
This particular LTT and no other. It was created not by the programmers, not by the developers of the technology, but by the system itself, one moment before the word "Hello" was encrypted. It is precise and as definite as possible, with no probability involved, yet it is absolutely unpredictable to "Eve".

The same Logical Time Tunnel is therefore formed in both systems, so the word "Hello" is first encrypted within it and then decrypted within it as well.

Important note: in fact, the word "Hello" itself is not enciphered. What is enciphered is a vector: a reference pointing to the temporary analogues of the elements, the letters, of the word "Hello". This is very important to understand. It is the main principle.

And most importantly, the next LTT can be generated correctly only when the decrypted information matches the transmitted information down to the last bit. No modifications.
 
This is beautiful and very useful. It is so unexpected that, without a key, information can be exchanged more reliably than with one; it seems an inexplicable turn.

That is the view from outside. What happens inside is the opposite.

Gradually, we'll take it all apart in detail.
It only gets more interesting from here, I think.



Perhaps the attentive reader will ask: how quickly does the system react to a modification?

If the modification is in the command part of the data packet, the system responds instantly.
If the modification is in the information part of the data packet, then:
- for a data packet carrying false (decoy) information: instantly;
- for a data packet carrying user information: with a delay.

Therefore, any decrypted user information is first assigned the status "conditionally correct".
Then, if the following packet is received successfully: "most likely correct".
And finally, on receipt of the third data packet: "absolutely correct".

A data packet is only 304-516 bits, not the whole message.
So the user will not notice anything; he always ends up consuming only information that is "absolutely correct".

The technical explanation of this checking scheme is roughly as follows:
1. The minimum time needed to detect an information modification is the moment the cipher code reaches the last and penultimate decryption rounds (the 7th and 8th rounds of encryption).
2. The maximum time needed to detect even a 1-bit change in the information part of a data packet equals the time needed to send the next two packets and receive the next two packets.

Until that maximum time has passed, the system by default withholds the decrypted information from the user.
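A toy version of this delayed check: each side folds every payload into its evolving state, and a short tag derived from that state travels inside the following packet. The names and sizes here are illustrative, not taken from the real packet format.

```python
import hashlib

def step(state: bytes, data: bytes) -> bytes:
    # Both sides fold every decrypted payload into their state.
    return hashlib.sha256(state + data).digest()

def tag(state: bytes) -> bytes:
    # Short check value carried inside the *next* packet.
    return hashlib.sha256(state + b"check").digest()[:8]

sender = receiver = hashlib.sha256(b"shared state").digest()

# Clean delivery: the states stay in step, so the tags agree.
sender = step(sender, b"payload-1")
receiver = step(receiver, b"payload-1")
assert tag(sender) == tag(receiver)   # packet 1 confirmed by packet 2

# One modified bit in the next payload ("payload-2" vs "payload-3"):
sender = step(sender, b"payload-2")
receiver = step(receiver, b"payload-3")
assert tag(sender) != tag(receiver)   # exposed on the following packet
```

The damaged payload itself decrypts to something plausible ("conditionally correct"); it is the check carried by the next packets that promotes or rejects it, which matches the 1-3 packet delay described above.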
legendary
Activity: 2030
Merit: 1573
CLEAN non GPL infringing code made in Rust lang
To me, it doesn't make sense. Yet. I just don't understand how you can identify someone without knowing at least one detail about them. 2FA (time based) works on a secret and the current time, changing every 30 seconds.

Encryption works on a key, whether that's a shared secret key or a public/private keypair.

Yes, and the only problems with that are when they steal your 2FA private key at the time of creation, or when your device time isn't exactly in sync, or when the user loses the private key (because Google Authenticator was on the stolen phone, etc.)...

To me, 2FA is no excuse to replace a solid, well-randomized password made with a decent password manager (free open-source software, not online sites) that is itself protected by a very good master password and runs on a secure OS unlikely to have the random malware of the week sniffing around.

Passwordless solutions have always been defeated at some point; they are way too dangerous. You can do a one-time setup and then go asymmetric, as with SSH, where you exchange public keys once and never type a login password again, but only if your OS is secured.

And very likely some of the passwordless proposals involve fingerprinting you to the point of uniqueness. What happens when THAT info falls into the wrong hands? The same as with KYC/AML.
full member
Activity: 224
Merit: 120
1) Imagine that we are playing chess. We transmit our moves by telephone, over an open channel, or by posting them on a bulletin board; it does not matter. Between ourselves we have agreed that the chess game is a diversion. In reality, we need each chess move to point to a specific chess piece. We have also agreed, temporarily, that each chess piece is associated with specific information: a part of the information that needs to be "encrypted and transmitted", for example one byte of our data.

2) We transmit to each other only "service information": a reference saying from which square a piece should be taken and on which square it should be placed. It is just a chess move of some kind. To an external observer, all the pieces are placed on the board randomly, in an unknown arrangement. In our version of chess, let every piece be allowed every move, without discrimination.

3) I transmit a move on my board, A5 to B2, without naming a piece; only on my partner's board is it clear that this is the "black bishop". The black bishop, by default and temporarily, for this communication session or this data packet, is associated with some byte of information. So by transmitting the digitized code of the move, I transmit a reference: a vector that is unambiguous only in the reference frame chosen for this data packet.

4) Note that the reference point can also be changed. The coordinate system and its origin can be at any of the four corners of the chessboard (as is usual), inside the chessboard, or outside it. The digital code of the chess move changes with this choice. Either way, it is one more uncertainty, which is always welcome in cryptography.

5) This chess move, this reference in this space, this vector, I additionally encrypt as well as I can. There are several more rounds of encryption, the last of which is an XOR with a one-time binary tape whose length exactly equals the length of the reference's cipher. This is a cipher of the Vernam class, with the one difference that our one-time binary tape is never transmitted from me to my partner. The final cipher is therefore invulnerable: secure in the absolute sense of the word (C. Shannon's theorem, proved in 1945).

6) In fact, I encrypt only the reference, which carries no meaningful information for an external observer even if he decrypts it. He does not see the chess game, so he does not see which piece the reference pointed to. The piece is the information that I am "transmitting and encrypting" at this moment in time.

7) Why, then, the additional rounds of encryption? They are not needed to encrypt the information. But as a safeguard against cryptanalysis by chosen-plaintext attack (CPA) over very large volumes of ciphertext, they do not hurt.
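The chess analogy can be compressed into code. In this sketch a shared seed stands in for the synchronized internal state (the thread insists the real arrangement comes from geometric events, not a seed): the "board" is a private mapping of bytes to squares, only square references are sent, and each reference is XORed with a locally generated one-time tape that never travels over the wire.

```python
import random

class Endpoint:
    def __init__(self, shared_seed: int):
        # Both sides rebuild the same secret "chessboard": a private
        # arrangement mapping each byte value to a square.
        self.rng = random.Random(shared_seed)
        board = list(range(256))
        self.rng.shuffle(board)                  # square -> byte value
        self.byte_of = board
        self.square_of = {b: s for s, b in enumerate(board)}

    def _tape(self, n: int) -> bytes:
        # One-time binary tape, generated locally, never transmitted.
        return bytes(self.rng.randrange(256) for _ in range(n))

    def encrypt(self, data: bytes) -> bytes:
        refs = bytes(self.square_of[b] for b in data)    # the "moves"
        return bytes(r ^ t for r, t in zip(refs, self._tape(len(refs))))

    def decrypt(self, cipher: bytes) -> bytes:
        refs = bytes(c ^ t for c, t in zip(cipher, self._tape(len(cipher))))
        return bytes(self.byte_of[r] for r in refs)

alice, bob = Endpoint(1213), Endpoint(1213)
assert bob.decrypt(alice.encrypt(b"Hello")) == b"Hello"
```

Even an observer who strips the XOR layer sees only square numbers; without the private board arrangement, the references carry nothing, which is the point of step 6 above.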
full member
Activity: 224
Merit: 120
Penetration and surveillance systems keep developing.
We must take their capabilities into account when developing encryption products.

Literally everything is being observed and analyzed:
- the level of power consumption;
- keystroke sounds (information is captured remotely off window panes, by laser);
- the electromagnetic emissions of the monitor, which at a distance of about 300 meters reveal the area of mouse movement on the screen and the movement of active menu items in windows;
- modulation of electromagnetic radiation at the mechanical contacts of electrical connectors (for example, a 3.5 mm headset jack inserted into a device modulates the useful signal onto the radiated frequency of the device's processor, and it can be demodulated at a distance);
- reading information from the LED that signals hard-drive access (via hidden spyware pre-installed on the PC; Israeli researchers demonstrated exactly this with a drone that captured the hard-drive LED through a window at up to 6,000 bits per second).


For these reasons, the system is designed so that an external observer cannot detect changes in the operating modes of our encryption system by monitoring and analyzing power consumption. Unfortunately, such information can be collected remotely by specialized equipment, and we take that into account.

I read about that LED hack; being able to read info from the LED of a machine or router is a rather unnerving thought.

Exfiltration via router: https://www.youtube.com/watch?v=mSNt4h7EDKo&feature=youtu.be

Not something many people think about, but it is a valid attack vector and is in the wild now.

The video above is actually leaking some info, if you can slow it down enough to capture it.

And this is a live attack using this very technique, with drones:

https://www.youtube.com/watch?v=jHb9vOqviGA
-------------------
Yes, this is a real type of attack, used against offline systems.

Another attack vector I did not mention in the last post, a modern one against offline (air-gapped) computers, is a two-way channel over ultrasound through an ordinary acoustic device, a portable device, or a personal computer.
Interestingly, a normal speaker, a notebook, even a modern smartphone can not only emit in the ultrasonic range (above 22 kHz) but also act as a microphone for such signals.

In general, the situation with our personal security is not just bad; it is deteriorating.

That is why everything possible is taken into account when developing this keyless encryption and data transfer technology.

Now back to the previous post, to the question of encrypting and transmitting both ordinary information and false, decoy information: the system handles this very organically.

For security reasons, fake information is handled in exactly the same mode as useful information: absolutely identically, in the speed of all the system's transformations, in the level and reliability of encryption, and so on.
Nothing in the system changes in terms of load on computing resources or memory.
These methods (and there are others) prevent an external observer from noticing and analyzing the system's work through differences in the power consumption of the user's device.

In addition, the keyless system has unique protection against processing an erroneous, modified data packet in any of its operating modes.
It works especially well in the mode of encrypting and transmitting false information, more precisely at the moment of its reception and decoding.
This is possible because the rule for generating fake (not user-supplied) information is the same for all systems (two or more) sharing the encrypted communication channel.

Each of these systems has, for its LTT and for its closed communication channel, a unique formula for deriving such information, based on the current geometric events defined for that moment in time.

Therefore, any modification in such a data packet falls into one of two categories:
- modifications of noise origin;
- elements of deliberate modification for an attack.
Either way, the system sees these deviations instantly, down to a single damaged bit.

Clearly, this effect of instant verification of every received data packet (where any single modified bit is visible) is absent in the mode of working with ordinary user information. In that case a 1-bit modification will certainly still be visible, but later, after the next 1-3 received packets, which is still very, very fast.

The reason is simple: user information by definition has entropy, a natural uncertainty; that is what makes it information rather than expected data. So the error shows up later, because it inevitably breaks the symmetry of the systems in the communication channel. Further explanation is superfluous here.

The main and very useful point is that any modification will become visible sooner or later, because only keyless encryption systems use all derivatives of the information to select the multiple encryption schemes, on multiple rounds of encryption, that form the next packet of information.



A data packet is the basis for everything in a keyless system.
It has to be formed in a unique way.
Its task is to carry not only the encoded information but also service information used to control and synchronize the symmetric states of the systems in the communication channel.
Commands are used for this purpose. Many commands carrying service information are duplicated by a hidden addition to the main user information: information that is fake, but logically meaningful to the system that accepted the packet. It is a kind of "secret" correspondence between the systems, carried on top of the main encoded information and the commands encrypted in each data packet. We call these "character commands".
These character commands additionally confirm the system's basic commands.
But since we strive for maximum secrecy, every command also has a full-width bitwise duplicate, in which every bit has the exact opposite value of the corresponding command bit. This ensures that the ratio of "one" bits to "zero" bits does not change regardless of the command code.
For example, a command has the code: 00000000000000000000000001
Its duplicate is then recorded as:   11111111111111111111111110

This is done so that a cryptanalyst cannot detect the appearance of a command in the packet by measuring the density of either binary value across all bits (e.g., the number of "1"s relative to "0"s).

If you do something in good conscience, you should do it well, without exceptions.
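Framing a command together with its bitwise complement, so that the count of ones never betrays the command code, is straightforward. A minimal sketch (function names are illustrative, not from the real system):

```python
def with_duplicate(command_bits: str) -> str:
    # Append the bitwise complement: the framed command then carries
    # exactly as many 1s as 0s, whatever the command code is.
    complement = "".join("1" if b == "0" else "0" for b in command_bits)
    return command_bits + complement

def verify(framed: str) -> bool:
    # Receiver side: every duplicate bit must invert its command bit.
    half = len(framed) // 2
    return all(c != d for c, d in zip(framed[:half], framed[half:]))

framed = with_duplicate("00000000000000000000000001")
assert framed.count("1") == framed.count("0")  # balanced by construction
assert verify(framed)
```

Any command code whatsoever produces the same 50/50 bit balance, which is exactly what denies the cryptanalyst a density signal.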



Geometric encryption methods, in fact, do not encrypt the information itself, unlike other cryptographic systems.

They establish a temporary correspondence between the information intended for encoding and the system's internal virtual elements.

The system then forms a reference to the selected element.

The reference, and only the reference, is digitized and encoded. It is what gets transmitted over open communication channels.

The reference itself contains no encoded information, so applying cryptanalysis or brute force to the reference's code is meaningless and useless.

These principles capture the essence not only of geometric encryption methods but of keyless encoding methods in general.

Moreover, such a model makes it easy to change the position of every bit in the data packet before it is transmitted over the open communication channel.

This feature, this advantage, makes it easy to hide code sections such as those described in the previous post, namely:
000000000000000000000000001

especially when the full inverse duplicate of that code is present:
111111111111111111111111110

Diffusing every bit of the combined code made up of the two codes above yields a resulting code that bears no resemblance to its original components.

Moreover, full bitwise diffusion (permutation of bits) applied to any code concatenated with its inverse always produces a new code in which all the bits are arranged in pseudo-random order, and the numbers of ones and zeros are always in equilibrium.
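That equilibrium survives any bit permutation, since shuffling cannot change the number of ones. A short sketch, with a seeded shuffle standing in for the system's state-driven permutation (the seed is purely illustrative):

```python
import random

def diffuse(code_bits: str, seed: int) -> str:
    # Concatenate the code with its complement, then permute all bits.
    complement = "".join("1" if b == "0" else "0" for b in code_bits)
    bits = list(code_bits + complement)
    random.Random(seed).shuffle(bits)  # stand-in for the shared-state permutation
    return "".join(bits)

out = diffuse("0" * 26 + "1", seed=42)
assert out.count("1") == out.count("0")  # ones and zeros in equilibrium
assert len(out) == 54
```

A receiver holding the same state applies the inverse permutation, splits off the duplicate half, and checks the bit-by-bit inversion as described earlier.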

This is the most unpleasant model for cryptanalysis.

A code that contains no encoded information and is obtained without a key fears no cryptanalysis at all: neither exhaustive search, nor key recovery, nor quantum computers of any power.



It is worth explaining that only the command part of the data packet is duplicated, which is 8 to 20% of the capacity of the entire packet.
The code containing the information part of the packet could be duplicated the same way, but there is probably no point in that.

Command codes and the codes duplicating those commands (logical repetitions of the commands) are identical in size but opposite in bit values.
They are not transmitted in clear text; another round of encryption takes place.
The duplicate of any command, like the command itself, must first be decrypted, and only then is each bit checked for inverse correspondence to the command.

Given the development of modern cryptanalysis, many of whose capabilities are unknown to us, this keyless encryption technology adds one more round of encryption after the bits are permuted: the data packet (consisting of the information code, the command code, and the code of the duplicate commands) is added modulo 2 to a one-time binary tape.

This one-time binary tape is obtained by the same geometric method described in previous posts. The model of the internal geometric space is built so that the maximum volume of one-time binary tape generated at the moment the space transformation stops is many orders of magnitude (!) larger than the size of the information to be encoded.

This binary random sequence is single-use and unique to each data packet. As a result, we effectively obtain a cipher of the Vernam class.
hero member
Activity: 1241
Merit: 623
OGRaccoon
Penetration and surveillance systems keep developing.
We must take their capabilities into account when developing encryption products.

Literally everything is being observed and analyzed:
- the level of power consumption;
- keystroke sounds (information is captured remotely off window panes, by laser);
- the electromagnetic emissions of the monitor, which at a distance of about 300 meters reveal the area of mouse movement on the screen and the movement of active menu items in windows;
- modulation of electromagnetic radiation at the mechanical contacts of electrical connectors (for example, a 3.5 mm headset jack inserted into a device modulates the useful signal onto the radiated frequency of the device's processor, and it can be demodulated at a distance);
- reading information from the LED that signals hard-drive access (via hidden spyware pre-installed on the PC; Israeli researchers demonstrated exactly this with a drone that captured the hard-drive LED through a window at up to 6,000 bits per second).


For these reasons, the system is designed so that an external observer cannot detect changes in the operating modes of our encryption system by monitoring and analyzing power consumption. Unfortunately, such information can be collected remotely by specialized equipment, and we take that into account.

I read about that LED hack; being able to read info from the LED of a machine or router is a rather unnerving thought.

Exfiltration via router: https://www.youtube.com/watch?v=mSNt4h7EDKo&feature=youtu.be

Not something many people think about, but it is a valid attack vector and is in the wild now.

The video above is actually leaking some info, if you can slow it down enough to capture it.

And this is a live attack using this very technique, with drones:

https://www.youtube.com/watch?v=jHb9vOqviGA
full member
Activity: 224
Merit: 120
In a world where hackers and such exist, I don't think keyless and passwordless authentication is possible yet. I'm not even satisfied with how fingerprint and face detection work, especially when a huge amount of money is involved. Honestly, I can't even think of a good security measure to counter those hackers. Even with a lot of security measures in place, they are still able to hack accounts in just a few clicks.
--------------
We can resist the hackers; we have to stand up to the crooks.

There is no need to be afraid of them, no need to consider them almighty. They are just looking for our weaknesses.

The question is whether that is possible within the existing security system. Our research, and the news of cybercrime, says unequivocally: no.

It cannot be done within this security system.
You're right about that.

All cyber defenders do is patch holes, and the holes appear faster than they are fixed.

That is a road to nowhere: a game of cat and mouse with a predetermined ending.

That's why we advocate a fundamentally new foundation for future security systems.

In fact, check my words: the scammer's main target is your password or private key.

This is the basis of the most massive class of attacks: phishing.

All we propose is to remove the ground on which the phishing scam stands.

But the problem is, nobody wants it.
That is how our world works.
You can't change it.
But you can, and must, build your own island of security. That does not conflict with the way this world fundamentally works.

It's a hypothesis.



Bilateral authentication is the right thing to do. Yet today we are told to recognize the genuine site visually: follow the green lock at the left of the browser's address bar, and be careful!

And this in the 21st century, the century of digital technology?

Isn't that an argument for the view that the foundation of the existing security system is fake?

Authentication, in all its variants, consists of protocols: sets of rules that are always based on the old methods of user identification.

What do new authentication proposals do? They add a little height to the wall of the old fortress.
What do the cheaters do? They add a new section of ladder to climb over the raised wall.
It is an endless game.
In this game, the cheaters always move first.
For that reason, the game makes no sense, not until the root cause is eliminated: the permanent identifier.

Biometric identifiers of any kind are even worse than a password, though that becomes clear only with time. Like any superstructure on top of the main wall of our imaginary fortress, biometric identifiers are vulnerable and extremely easy to fake: much easier than guessing a password.

It's a dead end. We need to change the base.

Our proposal: change the numeric identifier itself. Make it variable. It is the only solution. At first glance it seems absurd, but as you gradually come to understand this question, the methods and principles of geometric encryption, it becomes so vivid and unambiguous that, looking back, you wonder how you could have missed it before.



In a world where hackers and such exist, I don't think keyless and passwordless authentication is possible yet. I'm not even satisfied with how fingerprint and face detection work, especially when a huge amount of money is involved. Honestly, I can't even think of a good security measure to counter those hackers. Even with a lot of security measures in place, they are still able to hack accounts in just a few clicks.


In the world of cryptocurrency, many people keep a lot of money in their digital wallets. For the users' safety, developers hash passwords instead of storing them in the clear; the hash serves only to verify the user's authenticity while keeping security good. Hashing makes passwords harder to crack by turning them into a different combination of letters, numbers, and symbols, and this is essential today if you want to build a website or a system. But the hackers keep up too, so developers add another layer: two-factor authentication, which sends a code to the user that is then verified by the computer.
------------------
You are talking about making passwords more complex and hashing them.

Again, that is only a half-measure.

Look at this. You have invented and memorized (or written down) an original and very complex password (let it be for authentication).

What do you send to the site when authentication occurs? The hash of this password. And there is no such thing as a complex or a simple hash sum.

What will the hacker do? He could try to recover the password, but let us assume that will not work.

He will simply intercept the hash of your complex password. He basically does not need your password: the site does not know that complex password either; the site knows only its hash.

That's it, you lost.
But why?
Because yesterday's hash works just like today's.

There is no protection along that path. It is a deception.
TOR networks, VPNs, TLS protocols: everything gets hacked, as it turns out.
Why?
Because all these things are protocols, sets of rules based on the old key and password technologies.

You will never have protection, and you will never be told so, as long as you keep the same identifier: the hash sum of your complex password.
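The replay problem is easy to demonstrate. In this sketch the client sends the hash itself, as described above, and the server compares it against the stored value; nothing in the check depends on time or past events, so an intercepted hash works forever. The password and names are, of course, made up for the illustration.

```python
import hashlib

# The site stores only the hash of the (very complex) password.
stored = hashlib.sha256(b"a-very-long-and-complex-password").hexdigest()

def login(submitted_hash: str) -> bool:
    # Static comparison: no challenge, no state, no freshness check.
    return submitted_hash == stored

# Eve never learns the password; she intercepts the hash once...
captured = stored

# ...and yesterday's hash works exactly like today's:
assert login(captured)
assert login(captured)  # and tomorrow, and the day after
```

Any per-event identifier, even an ordinary challenge-response, breaks exactly this replay; a permanent hash is simply a permanent bearer token.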

Let's think about it together, shall we?



In a world where scammers crack any protection and steal passwords and keys, fundamentally new solutions are needed.

We need protection that is ahead of its time.

If you follow the path of complicating the existing system, without changing its foundations, this path is endless, because hackers are always one step ahead.

Change the foundation, in other words, remove passwords and keys from the security system. Then the competition with hackers will end with a completely different result, in our favor.

The thieves will have nothing to steal, so the interest in this activity, as it exists now, will disappear.

Thieves are far more comfortable today than they were yesterday. They just sit at home pushing buttons, running phishing and other attacks against us. The programs for breaking into our systems are so cheap and so widely available that almost any bad actor can use them.

Who made their lives so easy?
The existing imperfect, hole-ridden security system. As long as this system protects only your personal data, a person does not care much.
But as soon as that same person puts serious money under the protection of a password-and-key security system, he will not feel secure.

I understand this, and I also understand that it is time to change these principles of protection.



The moment has come when I am allowed to show images.

This is a diagram of three variants of the first round of the vector-geometric encryption model, which I tried to publish on December 8th.

I published an explanation of it on December 13th.
Take a look over there.

Take a look at the basics of keyless encryption technology, if you're interested:





It is a completely symmetric encryption system whose main mode of operation is keyless.
Both systems pass from one symmetric state to another through the processes of sending and receiving information.
Full identity of the states of the two systems is possible only if the information exchanged between them is not only identical but also decrypted correctly by both participants, down to a single bit.



Building a keyless system by mathematical modeling is probably not an easy task, given the absolute rejection of repetitive processes. In encryption, repetition is death.

On the other hand, building such a model using the river of time and an infinite number of variants of space is quite feasible.

In such a system, all events occurring in the virtual space-time continuum are controlled not by key information but by a multitude of unstable functions, most of which are geometrically related to their many arguments.
Among these arguments is the entire information flow without exception. The input information (the data to be encrypted), the decrypted information, and the intermediate code of every encryption round are rigidly bound to their time stamps and processed in time, so each piece of information corresponds to its own unique event in the system.
As a consequence, while such an encryption system is running, the digital code is processed not by any stationary algorithms, but only by the algorithms active at that particular moment in time, which the system forms for that moment (see below, "Time Logic Tunnel").

From this, two important properties of this encryption model follow:
1) strict observance of the decryption sequence;
2) absolute identity of the decrypted information to the encrypted information.

This model of encryption, at the decryption stage, completely excludes the possibility of any modification of the information.

Organizing the encryption and decryption of data in parts, as packets of information, allows the system to independently assess the integrity of the received data against what was sent, and of the decrypted information against what was encrypted, by analyzing the current state of the system relative to its past states.

Evaluating the states simply means comparing them for mutual identity.



The main element determining the current state of the system is the state of its internal space.

The transformation of the system's internal space (see above, Encoding Principle Scheme, "Internal Space Geometry") occurs continuously, against its own internal calendar-time while the encryption system is running, and in correlation with the external calendar-time at the moment a communication channel is organized for a new session.

Time stamps of the external calendar-time are used only in isolated episodes during operation, as well as by the communication protocol, which performs constant synchronization between two (or more) encryption systems.

The internal calendar-time, by contrast, is used only while the system is running; its "time unit" is not the duration of an event but the fact of its occurrence in the system.

Because the nature of their time units differs, these two calendar-times have no common reference points, metric or otherwise, beyond the names of the units.

The connection of the internal space state with the time parameters of the external and internal calendar-time forms a dynamic model of the virtual world.

In this model any repeated information always occurs in its own unique "time", which is strictly linear and whose values never repeat.

Therefore data that repeats, any number of times, will always be processed by a completely new combination of space and time.

This means that encryption always uses different algorithms, whether the information repeats or not. No matter how many times it repeats, it will always be processed as completely new information.
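The idea that identical data never encrypts identically, because the system's state advances with every event, can be sketched in a few lines. This is my own toy illustration, not the author's algorithm: a hash-chained state plays the role of the internal "calendar of events", assuming both sides start from the same initial state.

```python
import hashlib

class StatefulCipher:
    """Toy sketch: the keystream is derived from a running internal state
    and an event counter, so the same plaintext never encrypts the same
    way twice, yet two synchronized peers stay in lockstep."""

    def __init__(self, shared_state: bytes):
        self.state = shared_state   # both peers start from the same state
        self.event = 0              # internal "calendar": counts events, not seconds

    def _keystream(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            out += hashlib.sha256(
                self.state + self.event.to_bytes(8, "big") + len(out).to_bytes(4, "big")
            ).digest()
        return out[:n]

    def _advance(self, ciphertext: bytes):
        # every processed packet feeds the next state: no repetition possible
        self.state = hashlib.sha256(self.state + ciphertext).digest()
        self.event += 1

    def encrypt(self, data: bytes) -> bytes:
        ct = bytes(a ^ b for a, b in zip(data, self._keystream(len(data))))
        self._advance(ct)
        return ct

    def decrypt(self, data: bytes) -> bytes:
        pt = bytes(a ^ b for a, b in zip(data, self._keystream(len(data))))
        self._advance(data)
        return pt

alice = StatefulCipher(b"\x00" * 32)
bob = StatefulCipher(b"\x00" * 32)
c1 = alice.encrypt(b"hello")
c2 = alice.encrypt(b"hello")          # same plaintext, different ciphertext
p1, p2 = bob.decrypt(c1), bob.decrypt(c2)
```

Here the two ciphertexts differ even though the plaintext repeats, and Alice's and Bob's states remain identical after the exchange.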



For keyless geometric encryption, you need a model that is not static.
Such a dynamically changing model of space can be organized in different ways.

Interestingly, there are no restrictions on the forms and schemes by which such a model can be constructed.

Completely excluded, however, are space constructions in which its states can lock into a cycle. In other words, a model in which the same state of space recurs, whether with a fixed period or without any law of periodicity, is unacceptable.

Theoretically, the model of space can have any dimension, for example 2 or 3 dimensions (excluding time), and mathematical n-dimensional spaces are also allowed, but its total size must always be no less than a certain calculated value.

The most rational model, from this point of view, consists of 3 levels of two-dimensional space, each level organized in its own way and changing by its own laws; the state of the space as a whole is the combined state of all three levels.
The higher the dimensionality of the space levels, the better the keyless encryption technology works and the easier its principles are to realize, but the more complex the space-transformation algorithms become to compute.




The inner "virtual world" must have some definite structure and geometrical form (which one does not matter; there are many variants). These parameters can change, but must be known only at one moment: the present.
The chosen geometry of the space should be such that the number of variants of its transformation is maximal.
The rule is that the internal space ("GIS" on the diagram) must be constantly changing. A static state is allowed only at one moment in time ("LTT" on the diagram), in which one section of space may be used for encryption only once.
The GIS must be easy to control.
A keyless system must include algorithms for the continuous, serial transformation of the GIS from its old state to each new state. This principle, that every new state grows out of the previous one, links all states of the system into a single chain.
The GIS transformation algorithms that create this connected chain are derived from all events occurring in the system.
This means a continuous and non-linear connection with all processed information, without exception.
The GIS consists of elements that are always moving within their area of movement (within their enclave, their part of the "habitat" in space).
The space passes from one state to the next first of all (but not only) by moving its elements according to a prescription given individually to each element or group of elements.
After a transformation of the space, the main measure of a "correct" new state is the complete renewal of all neighbors of every element, without exception. If the transformation leaves any chosen element next to the same elements that surrounded it before, i.e. its old neighbours, the transformation is considered incomplete and the algorithms that performed it are unsuitable. This would be a space-transformation loop, which is unacceptable in keyless coding technology.

This requirement is fundamental, because one element of one enclave (one closed area of the GIS), at one moment of time (in one logical time tunnel, LTT), will be matched to the information being encrypted.
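The "all new neighbours" requirement can be checked mechanically. Below is a small sketch of my own (the grid size, the torus wrap-around, and the 4-neighbourhood are illustrative assumptions, not the author's specification) that tests whether a permutation of grid cells destroys every old adjacency.

```python
def neighbors(idx, w, h):
    """4-neighbourhood of a cell index in a w*h grid (torus wrap-around,
    so every cell has exactly four neighbours)."""
    x, y = idx % w, idx // w
    return {((x + 1) % w) + y * w, ((x - 1) % w) + y * w,
            x + ((y + 1) % h) * w, x + ((y - 1) % h) * w}

def renews_all_neighbors(perm, w, h):
    """True iff no element keeps any of its old neighbours after `perm`.
    perm[i] is the new cell of the element that was in cell i."""
    for i in range(w * h):
        old = neighbors(i, w, h)
        new = neighbors(perm[i], w, h)
        # elements formerly adjacent to i must not land adjacent to perm[i]
        if any(perm[j] in new for j in old):
            return False
    return True

w = h = 8
identity = list(range(w * h))            # nobody moves: old neighbours kept
# multiply both coordinates by 3 (invertible mod 8): adjacency is destroyed,
# every element receives a completely new set of neighbours
scaled = [(3 * (i % w)) % w + ((3 * (i // w)) % h) * w for i in range(w * h)]
```

On an 8x8 torus the identity permutation obviously fails this test, while the coordinate-scaling permutation passes: every former neighbour pair becomes non-adjacent.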



The main element determining the current state of the system is its internal space - GIS.

Transformation of the system's internal space (the change of its "Internal space geometry") takes place continuously, against its own internal calendar-time.

This time parameter has two independent counters.

 1. At the start of a new session of the encryption system, all settings are correlated with the external calendar-time. During operation, time stamps of the external calendar-time are used only in isolated episodes, because of the data exchange protocol (DEP), which performs constant synchronization between two (or more) systems in the communication channel.

2. The internal calendar-time, by contrast, is used only while the system operates; its "time unit" is not the duration of an event but the fact of its occurrence in the system.

Because the nature of their time units differs, these two calendar-times have no common reference points, metric or otherwise, beyond the names of the units.

Linking the state of the internal space with the time parameters of the external and internal calendar-time forms a dynamic model of the virtual world. In this model, any repeated information always occurs in its own unique "time", which is strictly linear and whose values never repeat.

For this reason, data that repeats sequentially, any number of times, will always be processed by a completely new combination of space and time, which means the encryption will always be performed by different algorithms.

Note that this is not the case in standard keyed systems. The same information, encrypted repeatedly under the same key and the same parameters (for example, a block cipher in ECB mode), is always encrypted identically.

Which model, do you think, is more "encrypted" and keeps more secrets?



Clearly, in such a sensitive model, the correct configuration and the correct selection of algorithms are very important.
This work must be done so that the "always new neighbours" condition holds for every element of the system.

It is also necessary to ensure that the transformations are resistant to looping, to the periodic arrival of the same symbol in the same cell.

In other words, the selected set of transformation algorithms must not bring the system into a state of cyclic repetition, periodic or otherwise.
In any encryption system a cycle can be computed; it is an obvious vulnerability and a loophole for cryptanalysis.

With each new transformation, each element of space, at every level of the space, must begin its movement to a new location from its previous location. This creates a connection to history: a continuous, connected chain of all transformations.

Just as the blockchain is a chain of connected blocks, here the analogue of a block is a state of space, which (in normal operation) is not saved; there is no need. Saving previously existing space states is possible in order to implement a "restore point" mode, by analogy with restore points in operating systems. Such recovery points can be created by taking and saving snapshots of the space and the time counters at the right moment.
 
Due to the strict interconnection of all system states and the direct dependence on the entire information exchange processed by the system on a point-to-point link, a difference of even one bit anywhere in the information stream is always noticeable, easily analyzed, and unambiguously identified.

Such an error can be fixed by requesting retransmission of the affected packet. This principle of operation gives the keyless encryption system absolute integrity control and makes undetected modification of any data packet, and therefore of the information exchange as a whole, impossible.
To this brief description we add one more rule: if even one element from any area of space has been used for "coding" at least once, that entire area of space (enclave) cannot be reused without a thorough transformation.
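The integrity mechanism described, comparing the current state against the history of all packets, can be illustrated with a minimal sketch (my own, using a SHA-256 running hash as a stand-in for the system state): a single flipped bit in any packet makes the two endpoints' states diverge, which is the signal to request a retransmission.

```python
import hashlib

class Endpoint:
    """Each side folds every packet into a running state; comparing the two
    state digests detects a single-bit difference anywhere in the history."""

    def __init__(self):
        self.state = hashlib.sha256(b"genesis").digest()

    def absorb(self, packet: bytes):
        self.state = hashlib.sha256(self.state + packet).digest()

    def digest(self) -> bytes:
        return self.state

sender, receiver = Endpoint(), Endpoint()
packets = [b"pkt-1", b"pkt-2", b"pkt-3"]
for p in packets:
    sender.absorb(p)

# transmission corrupts one bit of the second packet
received = [packets[0], bytes([packets[1][0] ^ 0x01]) + packets[1][1:], packets[2]]
for p in received:
    receiver.absorb(p)

# states diverge -> the receiver requests a retry of the bad packet
mismatch = sender.digest() != receiver.digest()
```

A clean run, with no corrupted packet, leaves the two digests identical.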

This implements a principle of combinatorics: if you apply any law to chaos, but apply it identically to every element of that chaos, you always get only new chaos, never order.

Chaos is a good rule for our system, whose inner workings cannot be determined by an external observer.

Any chaos, any internal uncertainty, random numbers and random variables are friends of encryption and enemies of cryptanalysis.



Exactly the same procedures, changes of the chaotic arrangement of elements relative to each other, occur simultaneously with all the "neighbours" of the element that was used to "encode" information at this moment of time, in this logical time tunnel (this LTT).

But here an interesting chain of events can be traced, one that leads to even more interesting results.

The encryption principle strictly forbids using more than one element of one enclave for a single act of "encoding" (which, in this geometric model, actually means finding a pointer vector to that element) at one moment of time.

Another principle holds that the system (above all the GIS) must never remain in a static state.

And we have no key that would dictate the order and regularity of changing all the settings and states of the system.

So what should we do with these contradictions?

There is a way out, and it is an interesting one that doubles as a disguise.
You can dilute the useful, original user information being encrypted with fake, garbage "information" created by the system at the moments when there is no user information to encrypt.

On the one hand this looks like a drawback, because the system must simulate information exchange at times when none is available.

On the other hand, the effect is not merely that the fake traffic disguises the useful information; the fake itself we do not really need.
More interesting is the effect of hiding from an external observer the real volume of information exchanged by the users. The external observer sees only the maximum volume of information that has passed through his observation point.
He has no idea how much encoded information is in that flow, or whether there is any at all.

This is a real closed communication channel, not just encryption.

Tell me, what other encryption systems have such an interesting and useful effect in the communication channel they organize?



Objectively speaking, the function of generating "fake" information exchange, by which the system simulates real information exchange, is not obligatory; in principle one could do without it.

Strictly speaking, it is an additional service for users, and it is so easy to provide in this technology that one does not want to refuse it.
All the more so since, as mentioned in the last post, the more new chaos there is relative to the old, the better, and this feature helps produce it continuously.

In any case, studies show that mixing in "fake" information masks the useful information well from an outside observer and prevents analysis of the information picture in the communication channel. Specifically, the observer cannot determine:
1) who is currently transmitting and who is receiving information;
2) who received and who transmitted information over the entire period since the system came into use;
3) whether any information exchange between two users (Alice and Bob) took place at all, or whether they were "silent";
4) how much information was transmitted from Alice to Bob;
5) how much information was transferred from Bob to Alice;
6) what type of information was involved in the exchange: voice, media, text, a streaming digital file in upload (or download) mode, etc.

Therefore, the communication channel organized by keyless encryption technology is a well-closed channel, which gives an outside observer no information about the events taking place in it, beyond counting the maximum possible information exchange between the participants.



The miracles of the geometric encryption model do not end there.

If we have our own chaos, with its own level of entropy, the pseudo-random state of the space elements lets us create numerical random sequences of any desired length.
And since any static state of the GIS is very brief, in time and in system events, these random numerical sequences are also one-time.

This is a complete analogue of one-time binary tapes, applying the exclusive-OR (XOR) operation to every bit of the code.

And that is a cipher of the Vernam class, the only absolutely secure cipher, in the strict sense of the word.

And that is a very bold claim...
After all, obtaining a cipher of the Vernam class is the theoretical maximum of cryptography in general.

Yes, and most importantly, there is no need for Alice and Bob to exchange these "one-time binary tapes".

That exchange was the only drawback of the Vernam cipher, and it confined this kind of encryption to top-secret diplomatic missions.
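The one-time tape itself is classical: XOR the message with a pad of equal length that is never reused. A minimal sketch (standard Vernam/one-time-pad mechanics, not anything specific to the geometric model):

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))   # one-time "binary tape", used once

ciphertext = xor_bytes(message, pad)      # encrypt: XOR every bit with the pad
recovered  = xor_bytes(ciphertext, pad)   # decrypt: XOR with the same pad
```

The security of this construction rests entirely on the pad being truly random, as long as the message, and never reused, which is exactly why key distribution was its historical weakness.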



The key question in this keyless system remains:
 - how do we obtain a reliable pseudo-random numerical sequence whose entropy approaches that of truly random values?

Clearly, any numerical sequence is easily transformed into a binary sequence of any length up to the maximum possible (up to its maximum information capacity).

Again we return to our moving, dynamically changing geometrical field of elements, in which no element tolerates constant, unchanging neighbours.

A good pseudo-random sequence can be obtained from this model if each element is represented as a number temporarily located at some point of our space, a space of Cartesian coordinates, with an initial reference point defined in that space.

Now, in the resulting numerical model with at least 2 Cartesian coordinates, we can draw absolutely any functional curve, the graph of any function (the "X" axis is the set of argument values and the "Y" axis the set of function values).

Which particular curve you draw does not matter, provided we are sure the elements of the system are arranged randomly relative to each other. All cells through which the graph of the selected function passes go into the sample that forms our numerical sequence.

Only the maximum number of cells the graph passes through matters. One important condition must be met: the length (in bits) of the binary sequence derived from the graph must be no less than the numeric code being encrypted (again measured in bits, with the XOR operation applied to each bit).
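As a rough illustration of this graph-sampling idea (entirely my own toy construction: the grid contents, the linear curve, and the bit packing are arbitrary assumptions), one can walk a function's graph across a grid of values, concatenate the visited cells into a bit string, and use it as a Vernam-style pad:

```python
import random

random.seed(7)  # deterministic grid for the sketch
W, H = 64, 64
# the "space": a grid whose cells hold byte values in unpredictable positions
grid = [[random.randrange(256) for _ in range(W)] for _ in range(H)]

def sample_along_curve(f, n_bits):
    """Walk the graph of f across the grid and concatenate the visited
    cells into a bit list of at least n_bits."""
    bits = []
    x = 0
    while len(bits) < n_bits:
        y = f(x) % H                     # clamp the curve into the space
        byte = grid[y][x % W]
        bits.extend((byte >> k) & 1 for k in range(8))
        x += 1
    return bits[:n_bits]

plain = b"secret"
stream = sample_along_curve(lambda x: 3 * x + 5, len(plain) * 8)
# pack the bit stream into bytes and XOR it onto the plaintext, Vernam-style
pad = bytes(sum(stream[i * 8 + k] << k for k in range(8)) for i in range(len(plain)))
ct = bytes(p ^ q for p, q in zip(plain, pad))
pt = bytes(c ^ q for c, q in zip(ct, pad))
```

The quality of the resulting pad is only as good as the unpredictability of the grid and of the chosen curve; the sketch shows the extraction mechanics, not a security argument.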

Thus geometric cryptography offers methods and means to organize not only a fully closed channel, but also an encryption round that uses one-time binary tapes, yielding a cipher similar to the ciphers of the Vernam class.

The symmetric system eliminates the need to transmit the one-time binary tapes over the communication channel. The information itself, or rather its derivatives, obtained from the current (and constantly changing) state of the system, from both the GIS and the LTT, provides the "key" to these one-time binary "keys" of any desired length.

And now it becomes even clearer why this system notices any modification of information, even at the level of one bit, and why the vector-geometric principles of encryption can yield an absolutely secure cipher of the Vernam class.

Or perhaps this is the beginning of a new class of ciphers, keyless ciphers, in which each packet of data is encoded with its own set of "keys", a set never repeated in the future, yet unambiguously defined, but only for the systems that have organized their own closed channel.



Without going into detail: using the same logic of the virtual-world model described above, which underlies the geometric encryption methods, it is easy to extract pseudo-random digital data that can replace useful information when needed.

As already noted, normal operation of the system does not require the user to supply information continuously. During pauses, or long silences, the system does not pause in time; it fills the gaps itself with fake information exchange. This "unreal" information flow has a purely pseudo-random character, obtained by a strictly geometrical method, which guarantees both a maximal level of "randomness" and ease of extraction, without extra computation, from areas of space that are unused and free at that moment.




The methodology of the geometric encryption method rests on the presence of a full-fledged, separate virtuality operating under its own internal order. An obligatory attribute of such an internal world is its own counter of time and events. This digital generator gives the system ever-new, never-repeating values. The external calendar-time (described in detail earlier) counts, or receives from the external environment, our astronomical calendar time, while the internal system calendar-time (see the posts above) lives its own life, with no common reference points with the external one.
We need these conditions to guarantee an "always new event" in the system, regardless of whether an event repeats or the data to be encoded repeats. Both of these time calendars can be stopped for certain operations.
   
As already mentioned, the normal mode of operation is to transmit and receive data continuously, giving the external observer only one indicator to analyze: the total volume of information exchange that could at most have occurred in the observed period.

But that is not the end of the external observer's troubles. The point is that the vector-geometric encryption technology makes it possible not to encrypt at all the very information that needs to be encrypted and transmitted (and then received and decrypted).

Again, a paradox. And again, at first glance, inexplicable!
Only at first glance.

The reason is that the proposed encryption model has an organic ability to use a method of "temporary correspondence" between the internal elements of the system and the elements of the information intended for encoding.

It is a "temporary" contract, one that is quickly replaced by a new contract.

Let's imagine that two chess players sit down to play chess, but this is only a distraction. In fact, every move, every chess piece is a transfer of information corresponding to that piece. The moves are transmitted through open communication channels, but the true meaning of these actions remains behind the scenes.
If we look at a standard chessboard, this model of space can accommodate 64 different elements, no more; that is the information capacity of this space.
Therefore, by the method of "temporary matching" we can assign each element of this space (each cell) a logical correspondence to some value of no more than 6 bits of information.

Then each "chess" move will mean passing one of the values of 6 bits of binary code.

But we cannot stop there either.
To describe a "chess move" we will not use a direct reference to the corresponding chess piece; let it be a "bishop".

We will use the method of "reference", building a geometric vector and its digital description in binary code.
Instead of describing a move as "bishop D2 to B3", we choose an initial reference point (and this reference point is not a constant, but a variable for each new move). In a simplified case, take the first corner of the chessboard; then D2 = 42, B3 = 33, and our move will be digitized this way: 4233.
After that, only "4233" is encrypted in the remaining encryption rounds.
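The move digitization can be sketched as follows. This is a hypothetical illustration using 0-7 coordinates and torus arithmetic, so the codes differ from the "4233" example in the text; the point is only that the same move yields a different code under every new reference point, and only a holder of the current origin can invert it.

```python
def encode_move(src, dst, origin):
    """Digitise a move as two cell references measured from a variable
    origin; the same physical move yields a different code per origin."""
    ox, oy = origin
    sx, sy = (src[0] - ox) % 8, (src[1] - oy) % 8
    dx, dy = (dst[0] - ox) % 8, (dst[1] - oy) % 8
    return f"{sx}{sy}{dx}{dy}"

def decode_move(code, origin):
    """Invert encode_move, given the same origin."""
    ox, oy = origin
    sx, sy, dx, dy = (int(c) for c in code)
    return ((sx + ox) % 8, (sy + oy) % 8), ((dx + ox) % 8, (dy + oy) % 8)

move = ((3, 1), (1, 2))                      # "D2 -> B3" in 0-based coordinates
code_a = encode_move(*move, origin=(0, 0))   # one reference point...
code_b = encode_move(*move, origin=(5, 6))   # ...a new origin, a new code
```

Without the origin, the four digits reveal neither the piece, nor its square, nor its destination, which is exactly the effect the text describes.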




Let's analyze what we hid, what we got, and why all these tricks.

1. Information about the "bishop": only we knew for certain that at this moment, in this LTT, at this point of space, there would be exactly a "bishop".
2. Information about where the "bishop" moved from and to, a piece unknown to the outside observer.
The coordinates 42 and 33 are relative values, which depend not only on the actual location of the "bishop" in this LTT, but also on the reference point chosen for this space in this LTT.
The reference point is a variable value for each move, for each element of "coded" information.
And we have not stated anywhere which 6-bit value, at this moment of time, in this LTT, corresponds to the "bishop" standing on D2!

Conclusion: "which piece", "where it stood", "where it moved": none of this, at any single moment of time (more precisely, during the period needed for this operation with the selected element), is known to anyone, not even the developer of this software.

For the next "move", for the "encoding" of the next information element, another LTT will be selected, operating in a completely different GIS, with a different location of the "bishop" and of all its neighbours from the previous event, the previous LTT.

Conclusion: instead of encoding the information itself, we have digitized and encoded some undefined vector, some pointer, some reference, in some undefined reference system with an unspecified origin of its coordinate system.

For an external observer these questions have no answers, and there is nothing for analysis to latch onto, because there is no key; there never was one and never will be.

Instead of encoding and transferring the information, we generate and encode a "link" into a changing space, similar in meaning to an Internet link to a website, but one that lives for a single moment.

Does it make sense to decrypt the link, knowing that it does not contain the encoded information? It cannot contain the encrypted information, by definition.

Thus, the variable origin of the coordinate system gives us displacement-vector coordinates of varying bit length. The reference code is shortest (in bits) when the origin coincides with the boundary of the coordinate system or lies inside the element field. If the origin lies outside the field of elements of the selected area of space (enclave), the bit length of the vectors, the references, or more precisely of their digital descriptions, increases.

The geometric encryption technology is thus able to work with an output code whose bit length varies relative to the input. It turns out that any information is transmitted as cipher code of unknown length, whose bit width is undefined for an external observer. And this makes cryptanalysis of the message very difficult.



So, the most unusual and most important thing is managing the encryption schemes of the information itself together with the changing internal state of the system.

If such a "live" system is in normal operating mode, it must keep moving. Its natural state is mobility through transformation of its internal states. For this reason, in normal working mode (there are others), to keep the internal transformations continuous, the system monitors when information arrives and recognizes the moments when it does not. At those moments the system itself generates data packets, encrypts them by all the usual rules, and transmits them: a complete analogue of live information.

By default, "information" means data supplied by the user and intended for encoding. Keeping the technology in a state of "user talk" while the user is silent, substituting its "own talk", may look unfamiliar, but for secrecy in the channel it is both necessary and useful.
Transformation of the system accompanied by information flows created (among other sources) by the system itself is mandatory.


Penetration and surveillance systems keep developing.
We must consider their capabilities when developing encryption products.

Literally everything is being observed and analyzed:
- the level of power consumption;
- keystroke sounds (information is captured remotely from window panes, by laser);
- the electromagnetic emissions of the monitor, which at a distance (about 300 meters) reveal the area of mouse movement on the screen or the movement of active menu items in windows;
- modulation of electromagnetic radiation at the mechanical contacts of electrical connectors (for example, a 3.5 mm headset jack inserted into a device modulates the useful signal onto the radiation frequency of the device's processor, and it is successfully demodulated at a distance);
- reading information from the LED that signals hard-drive access on a PC (via spyware pre-installed on the PC; Israeli researchers demonstrated exactly this with a drone helicopter capturing information through a window from the hard-drive LED at speeds of up to 6,000 bits per second).

For these reasons, the system is designed so that an external observer cannot learn about changes in the operating modes of our encryption system by monitoring and analyzing power consumption. Unfortunately, such information can be obtained remotely by special means, and we take this into account.
full member
Activity: 224
Merit: 120
The number of bitcoins lost because keys were lost or their keeper died is huge and grows every year. Theft of our confidential information and passwords is growing. I keep getting new confirmation of my position that passwordless and keyless systems will be in demand. Here is a fresh example.
Positive Technologies experts have summed up the results of the third quarter of 2019. Every fifth attack was directed against individuals, and almost half (47%) of all data stolen from them were credentials for various systems (logins and passwords). For example, the Clipsa Trojan can covertly "mine" cryptocurrency, steal passwords, substitute crypto-wallet addresses, and launch brute-force attacks against WordPress-based sites.
full member
Activity: 1484
Merit: 136
In a world where hackers and the like exist, I don't think keyless and passwordless authentication is possible yet. I'm not even satisfied with how fingerprint and face detection work, especially where large amounts of money are involved. Honestly, I can't even think of a good security measure to counter those hackers. Even with many security measures in place, they are still able to hack accounts in just a few clicks.


In the world of cryptocurrency, many people keep a lot of money in their digital wallets. For the users' safety, developers hash passwords rather than store them in the clear; the hash serves only to verify the user's authenticity. Hashing turns a password into a different combination of text, numbers, and symbols, which makes it harder to attack, and this is essential today if you want to develop a website or system. But attackers keep up too, so developers added another layer: two-factor authentication, which sends a code to the user that is then verified by the computer.
full member
Activity: 224
Merit: 120
South Korea's largest cryptocurrency exchange, Upbit, has notified its users of the theft of tens of millions of dollars in cryptocurrency from its wallet.

According to Lee Seok-woo, the head of Dunamu, the company that operates the exchange, on Wednesday, November 27, at 13:06, 342 thousand ETH (about $50 million) were transferred from Upbit's "hot" Ethereum wallet to an unknown wallet (0xa09871AEadF4994Ca12f5c0b6056BBd1d343c029)
jr. member
Activity: 168
Merit: 2
In a world where hackers and the like exist, I don't think keyless and passwordless authentication is possible yet. I'm not even satisfied with how fingerprint and face detection work, especially where large amounts of money are involved. Honestly, I can't even think of a good security measure to counter those hackers. Even with many security measures in place, they are still able to hack accounts in just a few clicks.
full member
Activity: 224
Merit: 120
Today even a poorly trained user can mount a phishing attack. There are ready-made programs for this. Everyone needs to know about it.

Here's a nasty, fresh example of how we might be attacked:

Large online services use two-factor authentication (2FA) to protect accounts. Usually its implementation comes down to this: in addition to the login and password, you must enter a one-time code sent by SMS or push notification to the mobile number specified during registration. Until recently, 2FA was considered a relatively reliable anti-theft measure, but ready-made tools now exist that make it easy to overcome.
One of them is Evilginx 2, which we will talk about here. It is a reverse proxy server and a ready-made framework for performing a MITM attack that bypasses two-factor authentication. Everything that can be automated in it is automated.
Evilginx 2 can create a signed certificate for a fake site using the free and fully automated Let's Encrypt certificate authority. This lets the attacker use HTTPS and decorate the phishing site with a green lock in the address bar. As a result, the fake is visually indistinguishable from the original. In addition, Evilginx 2 independently detects and intercepts valid cookies, and this is the main component of a successful hack.

We are used to hacker tools being written for Linux, but Evilginx 2 is available both on Windows and as a Docker container.
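For context, the one-time codes that such phishing proxies intercept are usually generated with the TOTP algorithm (RFC 6238): an HMAC over a counter derived from the current 30-second interval. A minimal sketch using Python's standard library, checked against the RFC's published test vector:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, t=None, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    mac = hmac.new(key, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59 seconds
secret = base64.b32encode(b"12345678901234567890").decode()
assert totp(secret, t=59, digits=8) == "94287082"
```

Both the server and the authenticator app compute the same code from the shared secret and the clock. As the Evilginx example shows, this proves possession of the secret but does nothing to authenticate the site the code is typed into.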
full member
Activity: 224
Merit: 120
Yes it is.

Modern technology is weak and full of vulnerabilities. I'm not an expert, but in my opinion the problem lies somewhere at the very foundation of the authentication offered to us.

For example, Microsoft writes eloquently in its whitepaper that password authentication has outlived its usefulness. And then it proposes to build a new building from old bricks: password + biometrics + key. But this is just a remolding of the old. And all biometrics are many times weaker than password methods; I mean, they are easily imitated. Hence the result.

As I see it, the error lies in the permanent identifiers that are assigned to the client and on the basis of which we are authenticated. This needs to change, because a persistent identifier is the target of the attack. And the thieves succeed: the number of thefts keeps growing!

My suggestion is a variable identifier. Then stealing it becomes meaningless.



An identifier that is constantly changing is an interesting thing, but only if it is completely unpredictable. Absolutely.
It must be impossible to predict.
It must change very often.
And these changes must stay synchronized with the server, in the sense that the server must know exactly what the identifier is right now, but absolutely must not know what it will be at the next moment.
At first glance, these requirements contradict each other.

Determinism and pseudo-randomness coexist in nature. They do not contradict each other if everything is properly organized.
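A classical construction that reconciles exactly these requirements is the Lamport hash chain used in the S/KEY one-time-password scheme: the server knows the current identifier and can verify the next one the client reveals, yet cannot predict it, because that would require inverting the hash. A minimal sketch in Python (this illustrates the general idea of a variable identifier only, not the vector-geometric scheme discussed in this thread):

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int) -> list:
    """Client-side: build seed -> H(seed) -> ... -> H^n(seed)."""
    chain = [seed]
    for _ in range(n):
        chain.append(H(chain[-1]))
    return chain

chain = make_chain(b"client secret seed", 1000)
server_state = chain[-1]              # the server stores only H^1000(seed)

# On each login the client reveals the previous link of the chain.
# The server checks H(revealed) == stored state, then rolls its state forward.
# Knowing H^k(seed) gives no way to predict H^(k-1)(seed) in advance.
for k in range(999, 996, -1):         # three successive logins
    identifier = chain[k]
    assert H(identifier) == server_state
    server_state = identifier

# A stolen, already-used identifier no longer verifies.
assert H(chain[999]) != server_state
```

Each identifier is valid exactly once, so an eavesdropper who captures it gains nothing, which is the sense in which "stealing it becomes meaningless".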

If we had a keyless encryption system, we would be able to recognize the digital code that we receive.
And if we can classify an incoming digital code as "ours" or "alien", then we have identified that code.
Remember that the next packet we receive is a completely different code, a different cipher, a different identifier, regardless of the information encrypted in it.
OK?
And that means we have identified its sender.
And that means authentication by a variable identifier, whenever the code "verifies" successfully.

If we use everyone's favorite mathematical models of encryption and of process description, then I do not know how to build such a contradictory system.
But if you use geometry and a fresh head, it turns out you can try.
Studies show that it is not very difficult; it can be done.

Generally speaking, we need a randomly selected virtual space, time as the guardian of its constant changes, and the information "to be encrypted", from which first- and second-order derivatives will be extracted: derivatives of a geometric nature, of course, always unidirectional.

Yes, another interesting property of such a system is that it not only needs no key; a key is dangerous by definition, because a key is a fixed regularity. And any regularity or repeatability is the worst enemy of encryption.

As an option, I can offer the following scheme of the principles of the geometric model of keyless encryption (next post):



I just found out that I'm not yet allowed to post images.

Please, anyone interested, open this link to the scheme displaying the first 3 principles of vector-geometric encryption.





Why do we need a fundamentally new technology based on the geometry of virtual spaces? Wouldn't it be better to improve the mathematical methods?

In fact, a new approach to encryption may well be in demand. The new technology of post-quantum passwordless authentication, keyless encryption and instant verification of any amount of data:

1. Uses a new geometric method of vector coding that provides high speed with minimal processor load;

2. Does not require a key function in the processes of encrypting and decrypting information;

3. Provides passwordless user authentication through post-quantum digital identifiers that are variable yet deterministic;

4. Rules out phishing in the client-server setting, with mandatory mutual passwordless authentication in both directions, of the server as well as the client;

5. Generates a post-quantum cryptographic code that is arguably resistant to any type of cryptanalysis, taking into account the appearance of quantum computers and quantum attacks;

6. Leaves no correlation patterns (including keys) to be discovered by brute-force attack;

7. Excludes hidden modification of a message, even at the level of a single bit: no deliberate or "noise" violation of the integrity of encoded or decoded data can go unnoticed;

8. Is resistant to chosen-plaintext attacks (CPA), which match selected plaintexts against their cryptographic code;

9. Offers the ability (but not the need) to use ordinary user information of any size, type and complexity as a key, and none of its parameters will affect the quality of the encryption code;

10. Verifies any amount of transmitted (or received) information absolutely accurately (to a single bit) and "quickly" (or continuously);

11. Prevents an observer in the middle from learning:
   1) who sent information to whom (or received it from whom);
   2) how much information is transmitted and/or received;
   3) whether any information exchange took place between users at all;
   4) all "pauses" or periods of "silence", of any duration, are filled
      and encoded exactly like the information itself;

12. Gives users the ability to detect and eliminate a "man in the middle" or "listener" easily and on their own.



As you can see from my previous post, an innovative approach to encryption that combines keyless and vector-geometric principles can deliver such a striking result.

We do not know how to achieve this with mathematical methods. Mathematics is always a law and an exact calculation. Both are great in themselves, but out of place where the encoding algorithms must be hidden, where the encryption scheme must be chosen without a key and therefore without fixed instructions.

The virtual space-time continuum, which has no constant certainty, combined with the geometric coding method, easily allows abandoning a pre-defined encryption scheme (that is, giving up the key). Moreover, it allows this scheme to be changed elegantly and unpredictably, replacing the encryption algorithms with new ones as often as necessary.



And what is interesting: it is impossible to build a keyless encryption system that is reliable, lightweight and hides useful traffic without passwordless authentication. They are two sides of the same coin: either both exist at the same time, or neither does.

For the consumer, this is great.



This is an explanation of the scheme, which can be viewed at the link (I am not yet allowed to post the image itself):

https://imgur.com/swVGL7L

 Yellow squares are GIS cells (the Geometry of the Inner Space, i.e. the one that exists at a given moment in time, or in other words in this Logical (tuned) Time Tunnel), which constantly change their coordinates non-linearly depending on the data being exchanged.
 The reference point for a given LTT (the Logical Time Tunnel, a moment in time that simultaneously fixes every single state and setting of the system) is selected by the system from a variety of algorithms, tied to the history of both previously received and previously transmitted data.
 The Zero Axis for a given LTT is dynamically selected in the same way (note that its choice in this LTT will necessarily differ from that in the next LTT).
 The data to be encoded generate an unpredictable stream of "parameters" for the transformation.
 "A", "B", "C", "D" in the yellow squares are the elements of the GIS corresponding to the data to be encoded (for convenience, and no more than that, they carry the same letter designations).
 "X" and "Y" are symbols that may or may not participate in the exchange; you need to know the instantaneous parameters of their location in this LTT to calculate the coefficients of the desired relative vector "XY".
 "A", "B", ... "X", "Y" have new coordinates in each new LTT.

 The choice of options for constructing spaces (i.e., specific GIS), their construction and their transformations are a priori endless (like a street map of any city on earth).




The scheme described above explains the very first principles of the geometric encryption technology. Such a scheme need not use a key. In all other encryption systems, the key selects the encryption scheme.

Here, the scheme selects itself, based on "its history", on new information and on the time during which the system processes it.

As a result, while such an encryption system runs, the digital code is processed not by any fixed algorithms, but only by the algorithms active at this particular moment in time, generated for this moment by the system (the "Logical Time Tunnel").

Therefore, this encryption model has 2 important properties:
1. strict observance of the sequence in which information is decrypted;
2. absolute identity of the decrypted information with the encrypted information.

At the decryption stage, such an encryption model completely excludes any modification of the information.

Organizing encryption and decryption in parts, in packets of information, enables the system to independently assess the integrity of the received data against the sent data, and of the decrypted information against the encrypted, by analyzing the system's current states relative to its past states.



There is an axiom in cryptography: any permanently acting encoding rule will always be a loophole for a cryptanalyst.

The key, too, is a special kind of rule applied during encoding and decoding. This technology has no such rule: there is no key, and no need for other rules.

The system generates its own rules, partly from the information itself, which thus partially performs the function of a key.

Information is always a new stream, which means the system always has, as it were, a new key: a key applicable only to this very information, and only at this moment in time, for encoding and decoding it.

But this is a cyclical logical paradox: the information is somehow applied to itself, for encryption and for decryption.

Sounds like nonsense?

This is a different perspective on keyless encryption methods, and when you go deeper into this technology it becomes clear that there are no paradoxes here.

It is a well-coordinated informational-temporal ratchet, with conditionally infinitely renewed rules.
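The idea that the information itself "partially performs the function of a key" has a loose classical analogue: self-synchronizing (ciphertext-feedback) stream constructions, where the keystream for each block is derived from what has already been transmitted, so the encoding rule evolves with the data. A toy sketch using SHA-256 as the feedback function (purely illustrative; it is not the vector-geometric scheme described here, and a real design would still need a secret shared initial state):

```python
import hashlib

def _keystream(state: bytes, n: int) -> bytes:
    return hashlib.sha256(state).digest()[:n]

def encrypt(plaintext: bytes, init_state: bytes, block: int = 16) -> bytes:
    # Each block is XORed with a keystream derived from the previous
    # ciphertext block, so the "rules" evolve with the transmitted data.
    state, out = init_state, b""
    for i in range(0, len(plaintext), block):
        chunk = plaintext[i:i + block]
        ks = _keystream(state, len(chunk))
        cblock = bytes(a ^ b for a, b in zip(chunk, ks))
        out += cblock
        state = cblock            # feedback: ciphertext drives the next state
    return out

def decrypt(ciphertext: bytes, init_state: bytes, block: int = 16) -> bytes:
    state, out = init_state, b""
    for i in range(0, len(ciphertext), block):
        cblock = ciphertext[i:i + block]
        ks = _keystream(state, len(cblock))
        out += bytes(a ^ b for a, b in zip(cblock, ks))
        state = cblock
    return out

shared = b"initial shared state (stands in for synchronized history)"
msg = b"packet one; packet two; packet three"
assert decrypt(encrypt(msg, shared), shared) == msg
```

A one-bit change in ciphertext block i corrupts the decryption of blocks i and i+1 and then the stream re-synchronizes, which is the kind of self-policing behavior the post above appeals to.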



Our goal is to give a first description and to confirm the possibility of a keyless encryption method based on original encoding methods found in properly constructed, organized, structured, unidirectional spatio-temporal virtual continuums.

At the same time, the following properties and capabilities hold:

1) instant verification of a large amount of information;
2) an alternative to the poorly scaling blockchain;
3) absolute resistance of the code to brute-force attacks;
4) resistance to attacks that match any amount of known plaintext with its corresponding cipher;
5) justification of the complete impossibility of modifying, or violating the integrity of, a cipher code produced by the keyless vector-geometric encryption methods, at the fundamental level of how this technology functions;
6) the possibility of passwordless authentication using a new type of identifier: variable and strictly determined at the same time.

Gaining such advantages and opportunities, which come only from keyless encryption technology itself, is a tempting prospect for our secure future.



Once again, let's ask the question: why replace the well-proven key and password technologies with some poorly understood keyless and passwordless ones?

Observation of events shows that this makes sense.

Here's a well-known, fresh example:

There are vulnerabilities affecting Intel Platform Trust Technology (PTT) and STMicroelectronics' ST33 TPM chip. These vulnerabilities in TPM chips allow cryptographic keys to be stolen.

A team of researchers from Worcester Polytechnic Institute (USA), the University of Lübeck (Germany) and the University of California, San Diego (USA) discovered two vulnerabilities in TPM processors. Exploiting the problems, collectively called TPM-FAIL, allows an attacker to steal cryptographic keys stored in the processors.

This chip is used in a wide variety of devices (from network equipment to cloud servers) and is one of the few processors to have received Common Criteria (CC) EAL 4+ certification (which implies built-in protection against side-channel attacks).

This is the price of error in key and password technologies.

And we use the network to transmit important information. I wonder whether this chip is installed in our segment of the network?
Maybe it's worth checking?



I see danger in any technology that has keys. A single attentive user will certainly be safe, because they can use the keys correctly.

But overall, statistically speaking, keys and passwords will always cause problems for a lot of people.

And there is no other way out than to switch to new technologies that no longer have the old problems.

I find it natural that the continuous development of computer technology allows us to find new working algorithms.

Keyless encryption is a new, non-mathematical, next-generation code generation method with 2 modes of operation: the main mode without keys, and an additional mode with the ability to use any information as a key.
The proposed keyless encryption technology has nothing in common with the keyless primitives known in cryptography, the one-way functions, which do not use many keys but have a single key used continuously.

The technology of keyless, vector-geometric encryption is not based on a complex mathematical apparatus, nor on the mathematical paradoxes of number theory, which seem unsolvable in polynomial time only over sets of astronomically large values.

This encryption method is based on an original, coherent, rationally organized geometric model of an internal space-time with the properties of a full virtual continuum, which is continuously changed by hybrid functions whose arguments are many dynamic events and current parameters.
full member
Activity: 224
Merit: 120
Here is an example of how phishing works against blockchain users:
"As soon as the user opened a wallet, or created a new one, Nginx replaced it with its own on the fake server. The criminals accessed the fields sharedkey, password, secondPassword, isDoubleEncrypted, pbkdf2_iterations, accounts."

And further:
"According to security specialists at blockchain.info, this phishing campaign is one of the largest in history..."

Moreover:
"The experts also found confirmation that these attackers were involved in creating several so-called HYIP projects, such as flexibit.bz, verumbtc.com and hashminers.biz.

Cisco researchers said the fraudsters earned $50 million in cryptocurrency over the past three years, at the expense of users all over the world."

What other examples are needed to understand that key technologies are very dangerous?
sr. member
Activity: 1050
Merit: 286
I'm sure nobody has yet invented anything better than OAuth2 over HTTPS. It is absolutely simple and it really works
-------------------
And here are facts confirming what was said above about the quality of Microsoft's OAuth 2.0!

Do you think they all tell us that there is a hole in it?

Read:

Security researchers from CyberArk, an Israeli company, have discovered a vulnerability in the Microsoft Azure cloud service. The problem affects certain applications that use the Microsoft OAuth 2.0 authorization protocol, and exploiting it allows creating tokens for signing in. In this way, attackers can take control of victims' accounts and act on their behalf.

Experts have discovered several Azure applications released by Microsoft that are vulnerable to this type of attack. If an attacker gains control over domains and URLs that Microsoft trusts, these applications allow him to trick the victim into automatically generating access tokens with the user's permissions. It is enough for the criminal to use simple social engineering to make the victim click a link or visit a malicious website. In some cases, the attack can be carried out without any user interaction: a malicious web page with a hidden embedded frame can automatically trigger a request that steals a token from the user's account.

Such applications have an advantage over others, as they are automatically approved in any Microsoft account and therefore do not require user consent to create tokens.

Be careful with products that advertise "software authorities."
For me, it is possible, but it is not safe for users, and especially for the owner of a particular wallet: as the owner, why would you not set up authentication for your own wallet? It is very important to secure your money from attackers and hackers. We should be alert and aware of every decision and step we make, so that we do not regret it later. Just like what they say about Azure: some people say attackers can bypass any account.
full member
Activity: 224
Merit: 120
Phishing is possible only if you have a persistent identifier. Besides, the server checks you, but do you check the server? In keyless encryption technology under the client-server model, phishing is not possible, because your identifier is always variable and the check goes in both directions: this is built into the transmission and reception protocol of the encryption system itself. If this were not so, the encryption scheme would be either constant or predictable. It would be an ordinary keyless cryptographic primitive, of which there are plenty: they are called one-way functions and so on.
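Checking "in both directions" is conventionally illustrated with mutual challenge-response, where each side proves knowledge of shared state without transmitting it; a phishing proxy that lacks the state cannot answer either challenge. A minimal HMAC-based sketch (a generic textbook pattern, not the keyless protocol proposed here):

```python
import hashlib
import hmac
import os

shared_state = os.urandom(32)   # state both sides already hold (here: a shared secret)

def respond(state: bytes, challenge: bytes, role: bytes) -> bytes:
    # The role label prevents an attacker from reflecting one side's
    # answer back as the other side's answer.
    return hmac.new(state, role + challenge, hashlib.sha256).digest()

# Client challenges the server...
c1 = os.urandom(16)
server_proof = respond(shared_state, c1, b"server")
assert hmac.compare_digest(server_proof, respond(shared_state, c1, b"server"))

# ...and the server challenges the client.
c2 = os.urandom(16)
client_proof = respond(shared_state, c2, b"client")
assert hmac.compare_digest(client_proof, respond(shared_state, c2, b"client"))

# A phishing proxy that does not hold the state cannot produce either proof.
fake = respond(os.urandom(32), c1, b"server")
assert fake != server_proof
```

Nothing secret crosses the wire; only the challenges and the proofs do, and each proof is useless for any other challenge.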
full member
Activity: 224
Merit: 120
I'm sure nobody has yet invented anything better than OAuth2 over HTTPS. It is absolutely simple and it really works
-------------------
OAuth 2 is a protocol.
It is based on keys and passwords, on ordinary cryptography.
Everything would be fine, if not for the attacks, the theft of password information, the phishing.

Look at some points of this protocol; everything is trivial.

1. Client ID and client secret.
After registering the application, the service creates client credentials: a client identifier (client ID) and a client secret. The client identifier is a public string used by the service's API to identify the application, and also used to build authorization URLs for users. The client secret is used to authenticate the application to the service's API when the application requests access to a user's account. The secret must be known only to the application and the API.
Nothing "good" here: your secret is your problem.

2. The user authorizes the application.
When the user clicks the link, he must first log in to confirm his identity (unless, of course, he is already logged in). After that, the service prompts the user to authorize or refuse.
Danger again.

3. Authorization grant type: implicit.
The implicit grant type is used by mobile and web applications (applications that run in a web browser), where the confidentiality of the client secret cannot be guaranteed. The implicit grant type also relies on user-agent redirection: the access token is passed to the user agent for further transfer to the application. This, in turn, makes the token available to the user and to other applications on the user's device. Also, with this grant type the application is not authenticated, and the process relies entirely on the redirect URL (previously registered with the service).
The implicit grant type does not support refresh tokens.
What is reliable here? Applications you just downloaded?

4. Authorization grant type: resource owner password credentials.
With this grant type, the user gives the application his credentials for the service (username and password) directly. The application, in turn, uses the received credentials to obtain an access token from the service. This grant type should be used only when other options are unavailable, and only if the application is trusted by the user (for example, it is part of the service itself, or of the user's operating system).

What a twist! I am supposed to vet the applications I installed myself! Yes, this is the usual system of trust: "I believe it" versus "I don't believe it."
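For reference, the resource owner password credentials grant from point 4 reduces to a single form-encoded token request (RFC 6749, section 4.3). The sketch below only constructs the request body; the endpoint and all credentials are made-up placeholders:

```python
from urllib.parse import urlencode

# Hypothetical values, for illustration only.
token_endpoint = "https://auth.example.com/oauth/token"
body = {
    "grant_type": "password",        # resource owner password credentials grant
    "username": "alice",             # the user's own service credentials...
    "password": "alice's password",  # ...handed directly to the application
    "client_id": "my-app",
    "client_secret": "app-secret",   # must somehow stay confidential
    "scope": "wallet:read",
}
# The whole grant reduces to POSTing this form to the token endpoint.
request_body = urlencode(body)
assert request_body.startswith("grant_type=password")
```

Every field here, including the user's actual password and the supposedly confidential client secret, travels in one request, which is exactly why this grant is discouraged unless the application is fully trusted.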

What did you find special and reliable in OAuth2 over HTTPS?

Shall we talk about cryptography on elliptic curves, the most reliable in the world, on which the entire blockchain rests and in which the crowd of believers believes?



I am amazed no one has mentioned Microsoft here, because it is one of the early adopters among huge companies. Passwordless authentication is good to a point, because thanks to Multi-Factor Authentication it makes it harder to fall victim to hackers, phishing and so on. If you are interested, you should read this Microsoft page and watch the videos, link here: https://www.microsoft.com/en-us/security/technology/identity-access-management/passwordless
I agree with OP, we really need something like that, and I am amazed that some companies haven't even thought about it, especially Ledger and others that aim at securing crypto wallets.
-------------------------
!
Thank you very much for the thematic link. I will try to work through the material. I cannot follow the video because, to my shame, I do not speak English.

In response, for my part, I want to share some interesting analytical material that I found on the Internet and edited.

I do not want to escalate the fears of those present here, but you need to know this if you are studying the question of security for real.

This material reasonably answers 2 important questions:

1. Is cryptography on elliptic curves as safe as we think?

2. Are quantum computations really dangerous for modern public-key cryptosystems?

In the higher circles, among the official organizations whose activities are directly related to cryptography, there has been lively activity since 2015.
Why everything suddenly became so urgent, no one explains to us.
They probably know more than they say. And they hide the loose ends...

The competent organizations that set universal technical standards are very noticeably concerned with the problems of so-called quantum-safe cryptography. Here are the facts that even we, non-specialists in cryptography, should pay attention to.

The international symposium "ETSI/IQC Workshop on Quantum-Safe Cryptography" (https://www.etsi.org/events/1072-ws-on-quantumsafe) was held on September 19-21, 2016 in Toronto, Canada. To emphasize the significance of this event: ETSI is the European Telecommunications Standards Institute (that is, the industry equivalent of the American NIST, the main standardization body in the United States), and IQC is the Institute for Quantum Computing at the University of Waterloo, one of the world's leading research centers that has dealt with cryptography in the context of quantum computers for well over a decade.

Alongside such solid organizers, the participants of the symposium included not only leading scientists from academia and industry, but also important people from the leadership of transnational corporations and government departments of Europe, North America, Japan, China and South Korea.

And besides, there were also top chiefs of the special services responsible for information protection in states such as Britain, Canada and Germany.

And all these very busy people gathered in Toronto, back in 2016, to discuss how to strengthen cryptography against technologies that, even by the most optimistic estimates, will become a real threat in twenty years at the earliest.

If we add that, almost simultaneously, in August 2016, NIST (USA) officially announced the launch of its own large-scale program for the transition from traditional to "post-quantum" cryptography, then the conclusion is quite obvious.

Big changes have clearly begun in the world of cryptography. And they began somehow very hastily, even with signs of panic. Which, of course, raises questions. Here is why.

In the United States, the first official signal that traditional cryptography urgently needed modernizing came in August 2015. It was then that the National Security Agency, as the state's main authority on ciphers, issued a statement on significant changes in its basic policy, in connection with the need to develop new standards for post-quantum cryptography, or PQC for short (National Security Agency, Cryptography Today, August 2015).
The parties involved in this process, and the NSA itself, stated that it considered the present moment (still 2015-2016) the most suitable time to get to grips with developing new public-key cryptography protocols: cryptography whose cipher strength would not depend on computations with quantum computers.

Naturally, the thought arises that someone, somewhere, secretly from everyone else, had already built a real quantum computer, even back then. And since the most visible and decisive initiative for an early transition to new, quantum-safe cryptography was demonstrated by the NSA, it is easy to guess which state comes to mind first: the one with not only the largest budget for such initiatives, but also all the necessary scientific and technical capabilities. The NSA, a highly classified organization secretly able to use the most powerful supercomputers on the planet.

In the open community of cryptographers, puzzled by the haste of the new initiatives, there are naturally many other speculations about what is happening. Perhaps the most informative survey, summarizing and comparing all such hypotheses and assumptions without a final answer, is the well-known article prepared by the famous cryptographers Neal Koblitz and Alfred Menezes at the end of 2015 (Neal Koblitz and Alfred J. Menezes, "A Riddle Wrapped in an Enigma").
To make it clearer why it makes sense to focus on the facts from this analytical work, two points should be briefly clarified.
First: what place its authors occupy in open academic cryptography.
Second: how closely their own scientific work is intertwined with the NSA's initiatives to move the cryptographic algorithms in use onto different tracks.

The American mathematician and cryptographer Neal Koblitz is (along with Victor Miller) one of the two people who in 1985, simultaneously and independently, came up with a new public-key crypto scheme called ECC (an abbreviation, we recall, for Elliptic Curve Cryptography, that is, "cryptography on elliptic curves").

Without going deep into the technical details of this method and its differences from the earlier RSA scheme, we note that ECC has obvious practical advantages: the same theoretical strength of the algorithm is provided with a much shorter key (for comparison, 256-bit ECC operations are equivalent to working with a 3072-bit modulus in RSA). This greatly simplifies the calculations and significantly improves system performance.
The second important point (almost certainly related to the first) is that the extremely secretive NSA leaned in favor of ECC in its cryptographic preferences from the very beginning. (!)

In the early years and decades this reached academic and industrial circles only implicitly (as when, in 1997, an NSA official, Jerry Solinas, first spoke at the public Crypto conference, with a report on their modification of the famous Koblitz scheme).

Later it was documented. In 2005, the NSA published its recommendations on cryptographic algorithms as the so-called Suite B ("Set B"): a set of openly published ciphers for protecting secret and top-secret information in national communication systems.

All the basic components of this document were built on ECC, while RSA was assigned the auxiliary role of a "first generation" (!), needed only for a smooth transition to the new, more efficient cryptography on elliptic curves... (!)
Now we need to recall Alfred Menezes, the second co-author of "A Riddle Wrapped in an Enigma". The Canadian mathematician and cryptographer has worked at the University of Waterloo, one of the most famous centers of open academic cryptography, all his scientific life since the mid-1980s. It was here, in the 1980s, that three university professors created Certicom, a company dedicated to the development and commercial promotion of cryptography on elliptic curves.

Accordingly, Alfred Menezes eventually became not only a prominent Certicom developer and the author of several authoritative books on ECC crypto schemes, but also a co-author of several important patents describing ECC. The NSA, in turn, when launching its Suite B project, had previously purchased from Certicom a large package (twenty-odd) of patents covering "elliptic" cryptography.

This whole preamble was needed in order to explain why Koblitz and Menezes are precisely those people who, for natural reasons, considered themselves knowledgeable about the current affairs and plans of the NSA in the field of cryptographic information protection.
However, for them, the NSA initiative with a sharp change of course to post-quantum algorithms was a complete surprise. (!)
Back in the summer of 2015 (!) The NSA “quietly”, without explaining to anyone at all, removed the “P-256” ECC algorithm from its kit, while leaving it with its RSA equivalent with a 3072-bit module. Moreover, in the NSA's accompanying statements it was quite clearly said that all parties implementing the algorithms from Suite B now no longer make any sense to switch to ECC, but it is better to simply increase the RSA key lengths and wait until new post-quantum ciphers appear ...
But why? What is the reason for such a sharp rollback to the old RSA system? I do not think such a serious organization makes such serious decisions for no reason.
Koblitz and Menezes have every reason to consider themselves competent in elliptic-curve cryptography, yet they had heard absolutely nothing about new attack methods that would compromise "their" crypto scheme. So everything happening around ECC astonished the mathematicians extremely.
People with close contacts in this industry know that the large corporations providing cryptographic services and equipment to the US government always get some kind of advance warning about changes of plan. But in this case there was nothing of the kind.
Even more unexpected was the fact that no one from the NSA approached the people at NIST (USA), who are responsible for the country's open cryptographic standards.

And finally, even the NSA's own cryptographic mathematicians from the Information Assurance Directorate (IAD) were taken completely by surprise when the leadership presented them with the post-quantum initiative...

It can be concluded that the very influential people who, deep inside the NSA, initiated the public change of course did so without any feedback or consultation, even with their own experts. This is precisely the conclusion Koblitz and Menezes reach in their analyses. And they readily admit that, in the end, no one really understands the technical background of what is happening here.
The conclusion suggests itself that there was some unknown activity, some hidden actors.

For an adequate perception of the intrigue, it helps to know that the principles of public-key cryptography were in fact discovered almost simultaneously (in the 1970s) in two fundamentally different places. First, a few years earlier, this was done by three secret cryptographers within the walls of the British intelligence service GCHQ, the analogue and closest partner of the American NSA. But, as had long been customary there, everything was done in deep secrecy and kept strictly "for internal use."

The discovery was made not by full-time GCHQ employees but by the mathematicians of the CESG unit, responsible for national ciphers and the protection of government communications systems in the UK. The close interaction between GCHQ and the US NSA, meanwhile, takes place primarily along the lines of joint intelligence activities. In other words, although the NSA also has its own IAD (Information Assurance Directorate) department, specializing in the development of cryptographic algorithms and information protection, the discovery made by their British colleagues came as a complete surprise to the mathematicians of this unit. They first learned about it from their fellow spies who work closely with the British...

And when essentially the same algorithms, based on factorization and discrete logarithms, were soon invented in the USA, independently of the special services, by researchers from the open community (Diffie, Hellman, Merkle, Rivest, Shamir, Adleman), the NSA made a huge effort to cram this genie back into the bottle.

Without revealing that the agency already possessed this mathematics, the NSA chiefs simply tried in every possible way to prevent the scientists from publishing their results widely. National security advocates pushed the line that strong cryptography is too serious a weapon, and that the new public-key encryption algorithms allow anyone, even people and parties who have never met each other, to hide from control.

As everyone knows, the NSA's attempts to ban knowledge and gag scientists achieved absolutely nothing. As a result, the open scientific community was left very angry with the NSA. Moreover, under pressure from scientists and industry, it was not the spy agency but a civilian structure, NIST (USA), that came to lead the development and implementation of commercial cryptography in the country.

And although this story is very old, it is quite clearly repeating itself. Provided, of course, you watch carefully.

The ETSI/IQC International Symposium on Quantum-Safe Cryptography (in 2016), with which this story began, had several notable features.
Firstly, it was very solidly attended by the heads of important structures within the special services of Great Britain, Canada, and Germany. All these national special services are analogues of the American NSA. Yet no one from the NSA itself appeared explicitly on the program. And this, of course, is no accident.

There is plenty of evidence, both from business leaders and directly from the heads of intelligence agencies, that after the revelations of Edward Snowden, almost the entire US IT industry (not to mention other countries) reacts extremely negatively to NSA activities. In other words, at international forums discussing ways to strengthen cryptography in the light of new threats, it is now prudent for the NSA simply to keep a low profile.

Another notable feature is that this "workshop" in Toronto was not the first but the fourth in a row. The first was held in 2013 in Paris, and the second, especially interesting for us, took place in the fall of 2014 in Ottawa, the capital of Canada.
That event is interesting because it featured a highly unusual report given on behalf of the secret British intelligence service GCHQ (P. Campbell, M. Groves, D. Shepherd, "Soliloquy: A Cautionary Tale"). It was a report from the CESG information security division, delivered personally by Michael Groves, who leads cryptographic research at the agency.

It must be emphasized that it is completely uncharacteristic for people from the British special services to talk about their secret developments at open conferences. This case, however, was truly exceptional.

In his report, Groves said not only that British cryptographers had been developing quantum-safe algorithms for a long time, since the early 2000s, but also that their own scheme, Soliloquy, had since been abandoned.

Importantly, the decision to abandon the design completely (rather than strengthen and modernize it) was made by the special services largely because of a very powerful and very impressive attack on the British scheme, developed back in 2013 (!) by a group of researchers from the open academic community. Their paper (K. Eisentraeger, S. Hallgren, A. Kitaev, and F. Song, "A quantum algorithm for computing the unit group of an arbitrary degree number field," STOC, ACM, 2014) describes an essentially new quantum attack of a very general type, covering, in particular, a wide range of "post-quantum" crypto schemes, including Soliloquy, which at that time was unknown to anyone...

The effect of this "half-open" talk by a senior cryptographer of the British intelligence service turned out to be exactly as intended. The information security industry and academia readily accepted the CESG people as very knowledgeable consultants (who had clearly demonstrated not only their leading competence but also their willingness to share even the experience of their failures). At the forum in Toronto, two CESG bosses were even entrusted with chairing sessions and moderating discussions. (!)

But another effect, one that usually accompanies any cooperation with special services, also manifested itself immediately: excessive secrecy, and attempts to suppress even research results that had already been published.

The story of the CESG chief cryptographer's appearance at the open symposium was covered extremely sparingly in the media, and the article and presentation slides about Soliloquy can be found on the Web only by those who know very precisely what they are looking for (on the ETSI website, where these files are exclusively hosted, no direct links to them can be found).

But the most unpleasant part is something else.

If anyone interested tries to get acquainted with the very article by open-community scientists that so impressed the British intelligence service, it quickly becomes clear that it is not easy to find. The article is absent from Arxiv.org, the scientific preprint site where computer scientists and cryptographers have long published alongside physicists and mathematicians. It is also absent from Eprint.iacr.org, the specialized site for cryptographic preprints run by the IACR, the International Association for Cryptologic Research. Moreover, each of the authors of the article in question has many other publications on one or even both of these sites.

But the one work we need is not there. Strange, but true.
Worse, if you set off to search for the file on the researchers' personal web pages on university sites, an ambush awaits there too. The most famous of the co-authors, Alexei Kitaev, renowned as a superstar of quantum computing, has only a tangential relation to cryptography and does not collect links to his publication files anywhere.

Another co-author, Sean Hallgren, who really is known as a cryptographer, used, like many other researchers, to post links to his publications on his university web page. But precisely at the article in question this practice suddenly stopped. For all previous articles the files are available, but for the one we need there is only the title. For all subsequent publications of 2015-2016 there is not even a title, although such works can be found in the preprint archives...

A truly complete list of everything that was, is, and even will be done (with the corresponding links to files) is found only on the site of the youngest of the co-authors, Fang Song. Significantly, though, it is not on his university web pages but on his personal website, FangSong.info. And even there strange losses are revealed. A PDF file with one variant of the article we seek is still available, yet the links to roughly the same file under names like "full version" and "Arxiv.org" turn out to be broken, looping back to the main page. That is, the files were clearly put up by the author but, just as on the ArXiv site, have inexplicably disappeared...
All "disappearances" of this kind (and there are quite a few similar cases) look mysterious only on a very naive and superficial view of things. Most often the explanation is already contained in the headers of the articles themselves, where the authors (in accordance with rules long established in science) are obliged to indicate the funding sources and grants with whose money the research was conducted.

Specifically, in our case, the sponsor of this uniquely outstanding article on the new method of quantum cryptographic attack is (surprise!) the US National Security Agency. Well, as they say, he who pays the piper calls the tune. Clearly, the authors of a study are always interested in the wide dissemination of their results, but their sponsors often have directly opposite goals...

The only dark and really important point not yet covered in this whole story is the following.

What connection can there be between the new, very effective (and very impressive to the special services) algorithm for breaking all kinds of cryptosystems with a hypothetical quantum computer, and the hasty steps of the NSA to remove elliptic-curve cryptography from circulation (back in 2015-2016)? The connection, as it turns out, is completely direct. But to notice it, once again, one must watch what is happening carefully.

When, at the turn of 2014-2015, the open community first learned about the post-quantum Soliloquy algorithm from the British intelligence service, its subsequent compromise, and the parallel invention of the quantum attack, one very competent and knowledgeable cryptographer, Dan Bernstein, made an interesting generalization:
https://groups.google.com/forum/#!topic/cryptanalytic-algorithms/GdVfp5Kbdb8

Comparing all the facts known at the time, Bernstein put forward the assumption that the new quantum algorithm from Hallgren, Fang Song, and their co-authors in fact also points the way to significantly more powerful attacks using traditional classical computers.

Moreover, on the basis of well-known but very vague comments by the British, Bernstein concluded that the British special services know this, but prefer to keep it secret from everyone...

And we know what happened afterwards. A few months later, in August 2015, the NSA suddenly surprised the whole cryptographic world with its sharp rejection of ECC cryptography with relatively short key lengths.

The only ones who were hardly surprised were probably the cryptographers of the British intelligence service.

Well, six months later, at the beginning of 2016, at least two independent publications from researchers in the open cryptographic community appeared, which in the most general terms confirmed Dan Bernstein's assumption:

1) Ronald Cramer, Léo Ducas, Chris Peikert, Oded Regev. "Recovering Short Generators of Principal Ideals in Cyclotomic Rings." In Eurocrypt 2016;

2) Jean-François Biasse and Fang Song. "Efficient quantum algorithms for computing class groups and solving the principal ideal problem in arbitrary degree number fields." In 27th ACM-SIAM Symposium on Discrete Algorithms (SODA), 2016.

In other words, it has now been rigorously shown, for everyone to see, that yes, indeed, the new purely "quantum" approaches to solving hard cryptographic problems can in fact significantly reduce the labor costs of breaking crypto schemes with classical computers.

As for compromising the ECC scheme specifically, nothing has been openly announced yet.

Or maybe there is no need to announce it?
Let's think together: would announcing it benefit the one who is in the know?

But this, it seems, is only a matter of time.





I am amazed no one has mentioned there microsoft cause it's one of the early adopter among huge companies. Passwordless authentication is good at some point cause makes it's more harder to get victim of hackers or phishing and etc thanks to Multi Factor Authentication. I think if you are interested in it, you must read what's written on this page of Microsoft and also watch videos, link here: https://www.microsoft.com/en-us/security/technology/identity-access-management/passwordless
I agree with OP, we really need something like that and I am amazed why some companies haven't even think about that, especially Ledger and etc which aim security of crypto wallets.
------------------------
I read the Microsoft passwordless authentication materials, but what is actually there is multi-password authentication under another name, with no real innovation.

What can you say about Microsoft: it is always true to its tradition of making strange software. Their main product, the Windows OS, is always full of holes; monthly, weekly, right up until its replacement, they patch it, and there are always hundreds of holes in the security system. If I managed such a company, I would hide my face.

It has long been noticed that the higher the salary, the less time left for reflection.

They have faithfully combined all the old authentication technologies they knew into one software product, merely adding their own protocol and a model document for sale and advertising. A perfect, endless business scheme.
By the way, it occurred to me: is their main goal not simply money?

These guys can sell something that no one else can sell.

Seriously, biometrics are the easiest identifier to fake. There is plenty of news about this from serious organizations, complete with demonstrations and experiments. I do not want to advertise it all. Anyone who wants can find (in the public domain, too) programs that will reproduce your face, your "fingers," and your "eyes." It is generally primitive. Of everything they crammed into their "passwordless" authentication, the most reliable element remains the password, and its semantic analogue, the key.

Mistakenly, they write the very opposite on the first page of their advertising document:

Passwords are no longer enough. IT around the world see the beginning of a new era, where passwords are considered as a relic of the past. The costs now outweigh the benefits of using passwords, which increasingly become predictable and leave users vulnerable to theft. Even the strongest passwords are easily phishable. The motives to eliminate authentication systems using passwords are endlessly compelling and all too familiar to every enterprise IT organization. But how do you get there?
For enterprise IT departments, nothing costs more than password support and maintenance. It's common practice for IT to attempt lessening password risk by employing stronger password complexity and demanding more frequent password changes. However, these tactics drive up IT help desk costs while leading to poor user experiences related to password reset requirements. Most importantly, this approach isn't enough for current cybersecurity threats and doesn't deliver on organizational information security needs.

It is difficult to understand ingenious people, and especially what it is they are doing.



I sure nobody still invented better than OAuth2 over HTTPS. It is absolutely simple and it really works
---------------------------
As I answered you earlier, OAuth 2.0 is an authorization protocol created on the basis of dangerous legacy technologies.

Now I can expand the answer, to make it clear that new names, regrettably, do not guarantee new qualities for the user.

The essence, however, is well disguised.

Here is material from public sources; I am not the author of these observations:

OpenID Connect is the third generation of OpenID technology, an authentication layer on top of the OAuth 2.0 authorization protocol. OpenID Connect allows Internet resources to verify the identity of the user on the basis of authentication performed by an authorization server.

1.
Phishing attacks. Some researchers believe the OpenID protocol is vulnerable to phishing attacks in which, instead of the provider, attackers send the end user to a site with a similar design... As a result, attackers can present themselves to Internet resources as the given user and gain access to the user's information stored on those resources.

Phishing attacks are also possible when a site that supports OpenID authentication is faked in order to obtain user information from the provider.

Important:

OpenID does not contain mechanisms to prevent phishing attacks. Responsibility for phishing attacks is shifted to OpenID providers.
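Since OpenID itself offers no anti-phishing mechanism, a relying party can at least pin the set of provider hosts it is willing to redirect users to. Here is a minimal sketch in Python; the host names are hypothetical, and this is my own illustration, not part of any OpenID library:

```python
from urllib.parse import urlparse

# Providers this relying party trusts (hypothetical host name).
TRUSTED_PROVIDERS = {"openid.example-provider.com"}

def safe_redirect_target(url: str) -> bool:
    """Allow a redirect only to a known provider, and only over HTTPS."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_PROVIDERS

print(safe_redirect_target("https://openid.example-provider.com/auth"))   # genuine provider
print(safe_redirect_target("https://openid.examp1e-provider.com/auth"))   # look-alike host is rejected
```

Such a check covers only the relying party's own redirects; it does nothing against a look-alike site the user reaches by other means, which is exactly the responsibility shifted onto providers.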

2.
Man-in-the-middle attack over an unprotected connection.
... To redirect the user to the Internet service, the provider gives the user a special URL. The problem is that anyone who can obtain this URL (for example, by sniffing the network cable) can replay it and gain access to the site as that user.

3.
Some providers use a nonce (a one-time code) to protect against this attack, allowing that URL to be used only once. The nonce solution works only if the legitimate user is the first to use the URL. An attacker who is listening on the communication channel between the user and the provider, however, can obtain the URL, immediately terminate the user's TCP connection, and then carry out the attack himself. Thus, one-time codes protect only against passive eavesdroppers; they cannot prevent an attack by an active adversary.
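This limitation is easy to see in code. The sketch below is my own illustration (not any provider's actual implementation) of a one-time nonce store: a replayed URL is rejected, but only after somebody has consumed the nonce once, so whoever gets there first wins.

```python
import secrets
import time

class NonceStore:
    """Issues one-time nonces with a time-to-live, as some providers do."""

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self.issued = {}                         # nonce -> expiry timestamp

    def issue(self) -> str:
        nonce = secrets.token_urlsafe(16)
        self.issued[nonce] = time.time() + self.ttl
        return nonce

    def consume(self, nonce: str) -> bool:
        """True only on the FIRST use of a live nonce; replays return False."""
        expiry = self.issued.pop(nonce, None)    # pop enforces single use
        return expiry is not None and expiry > time.time()

store = NonceStore()
n = store.issue()
print(store.consume(n))    # first use: accepted
print(store.consume(n))    # replay: rejected
```

If an active attacker calls consume() with the stolen URL before the user does, the attacker is the one accepted; the store cannot tell who got there first, which is the point made above.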

4.
Identifier reuse.
A user can change OpenID providers, thereby releasing his identifier at the previous provider. A new user can take that identifier and use it on the same sites as the previous user, which gives the new user access to all the information associated with the identifier. This can happen by accident: the new user need not be an attacker deliberately seeking that information.

5.
Authentication errors.
In 2012, researchers published a paper describing two vulnerabilities in OpenID. Both vulnerabilities allow an attacker to gain access to the victim's account.

The first vulnerability exploits OpenID Attribute Exchange. The problem is that some Internet services do not verify the data transmitted through Attribute Exchange. According to the researchers' report, many popular sites were affected, including Yahoo! Mail.

The second vulnerability is related to an error on the provider's side and likewise allows access to the account on the relying party's site.

However much you reshape the old, you will not get anything good and new out of it.



I sure nobody still invented better than OAuth2 over HTTPS. It is absolutely simple and it really works
-------------------
And here are facts confirming what was said above about the quality of Microsoft's OAuth 2.0!

Do you think they tell us about every hole in it?

Read:

Security researchers from CyberArk, an Israeli company, have discovered a vulnerability in the Microsoft Azure cloud service. The problem affects certain applications that use the Microsoft OAuth 2.0 authorization protocol, and its exploitation allows attackers to create sign-in tokens. In this way, attackers can take control of victims' accounts and act on their behalf.

Experts discovered several Azure applications released by Microsoft that are vulnerable to this type of attack. If an attacker gains control over domains and URLs that Microsoft trusts, these applications will let him trick the victim into automatically generating access tokens with the user's permissions. It is enough for the criminal to use simple social engineering to get the victim to click a link or visit a malicious website. In some cases the attack can be carried out without any user interaction: a malicious website with a hidden embedded page can automatically trigger a request to steal a token from the user's account.

Such applications have an advantage over others, as they are automatically approved in any Microsoft account and, therefore, do not require user consent to create tokens.

Be careful with products that advertise "software authorities."
full member
Activity: 224
Merit: 120
In addition to the benefit for the user (the key cannot be stolen), there are advantages for the blockchain itself in general.

Here are the three principles of this keyless technology, built on geometry, not mathematics:

1) a chain of state sequences;
2) the presence of all links of the chain (blocks);
3) the absolute dependence of each new link (state of the space) on all the information used in the exchange.

These correspond to the definition of the classic "blockchain" ("a continuous sequential chain of blocks built up according to certain rules (a linked list)"), with the important difference that there are no blocks as such: each corresponds to an existing system state, which, unlike a block, does not need to be stored.
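The three principles above behave like a hash chain, in which each state absorbs the previous state and the newly exchanged data packet. Here is a minimal sketch in Python of that general idea (my own illustration, not the author's actual geometric construction):

```python
import hashlib

def next_state(prev_state: bytes, packet: bytes) -> bytes:
    """Each new state depends on the previous state and the new packet,
    and therefore on all information exchanged so far."""
    return hashlib.sha256(prev_state + packet).digest()

GENESIS = b"\x00" * 32                       # agreed initial state

state = GENESIS
for packet in [b"msg-1", b"msg-2", b"msg-3"]:
    state = next_state(state, packet)

# Tampering with any earlier packet changes every later state:
tampered = GENESIS
for packet in [b"msg-1", b"MSG-2", b"msg-3"]:
    tampered = next_state(tampered, packet)

print(state != tampered)    # the chains diverge
```

Note that only the current 32-byte state needs to be kept, not the packets themselves, which mirrors the claim that states, unlike blocks, need not be saved.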


--------------------------------------------------
Classic blockchain vs. alternative blockchain

1) Classic: no parallelization, no synergy, no mutual assistance; only duplication, immediate (continuous) and repeated a million times over.
   Alternative: copying or partial copying, with parts of the system distributed among any number of users, nodes or super-nodes, or a central server, without restrictions; the weight of the system does not grow however intensively and continuously it is used.

2) Classic: all blocks are linked by a cryptographic signature, in chronological order, into a single chain; complex mathematical algorithms are responsible for this.
   Alternative: all blocks (states) are linked by an analogue of a cryptographic signature (at the level of the Vernam cipher); no complex algorithms are required for this.

3) Classic: integrating current payment networks into a blockchain can be so complex that no one will even try to go this way.
   Alternative: the problem of overloading computing power and existing networks is absent, since the system does not grow with use at all.

4) Classic: there are currently more than 1,400 digital coins, many with their own versions of the blockchain, each with its own pluses and minuses.
   Alternative: there is no point in creating so many variants of the technology for use in cryptocurrencies, since it is free of the main disadvantages of every variant of the classic blockchain.

5) Classic: preventing an attack requires complex security keys and two-factor authentication, and a "human factor" remains; in current reality the blockchain's "eternity" is limited to a dozen years, since the growth of hard-drive capacity clearly does not keep pace with the growth of blockchain volume.
   Alternative: each data packet not only carries information but also performs (like a 100% hash) the verification of every previously received packet and of the current one; there is no "human factor." The system does not grow by a single bit with the number of transactions; it grows only when a new node appears.

6) Classic: very low speed of operations, stuck transactions, miners combining into pools; the 51% problem becomes ever more urgent.
   Alternative: the speed of operations depends only on the number of nodes; there is no problem of confirming all the "blocks"; performance is very high and stable.
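For reference, the "Vernam cipher level" linking mentioned in the comparison above is the one-time pad: XOR with a truly random key that is as long as the data and never reused. A minimal sketch (illustrative only):

```python
import secrets

def vernam(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each byte with a same-length, single-use random key."""
    assert len(key) == len(data), "the pad must be exactly as long as the data"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"state-link"
pad = secrets.token_bytes(len(message))      # must never be reused

ciphertext = vernam(message, pad)
recovered = vernam(ciphertext, pad)          # XOR is its own inverse
print(recovered == message)                  # round trip succeeds
```

The scheme's information-theoretic security holds only as long as the pad is truly random, kept secret, and used exactly once.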
hero member
Activity: 2352
Merit: 905
Metawin.com - Truly the best casino ever
I am amazed no one has mentioned there microsoft cause it's one of the early adopter among huge companies. Passwordless authentication is good at some point cause makes it's more harder to get victim of hackers or phishing and etc thanks to Multi Factor Authentication. I think if you are interested in it, you must read what's written on this page of Microsoft and also watch videos, link here: https://www.microsoft.com/en-us/security/technology/identity-access-management/passwordless
I agree with OP, we really need something like that and I am amazed why some companies haven't even think about that, especially Ledger and etc which aim security of crypto wallets.
full member
Activity: 224
Merit: 120
It might be a solution to many problems concerning security in access in terms of technology. But in my opinion it doesn't allow users to recover accounts whenever in case an accident happened. In terms of bitcoin that uses wallet address and private key, we need to physically write or digitally save the information for us to retrieve our account. This technology might be possible and suits other platforms but I don't see its positive implication to cryptocurrency because it already uses strong encryption in hashes through the blockchain.
---------------------------
As for the use of keyless technologies in cryptocurrency wallets, such projects are still possible, at least in theory. Here is an example:
https://toxic.chat/
sr. member
Activity: 1344
Merit: 264
bit.ly/3QXp3oh | Ultimate Launchpad on TON
I sure nobody still invented better than OAuth2 over HTTPS. It is absolutely simple and it really works