
Topic: POW via training/validating Deep Neural Networks

legendary
Activity: 3038
Merit: 2166
Playgram - The Telegram Casino
monsterer2 and ETFbitcoin pretty much stated the core of the matter. To expand on what they pointed out:

1) How to provide viable problems in a decentralized manner? Just picking one up at random from a previously agreed-upon set is not enough -- who provides the set? How does the set get agreed upon?

2) Requiring the likes of Docker and Kubernetes to verify transactions adds quite an overhead for running nodes. It also opens up the question of how the datasets are provided to validating nodes in a tamper-proof and reliable way; the datasets themselves would increase the overhead for running nodes even further.

3) It seems like you are suggesting block times of 1 hour, which, given the flak Bitcoin occasionally gets for its 10 minutes, would definitely need to be reduced if such a cryptocurrency were to gain any traction.

4) How to keep block times steady? How to reliably know when 1 hour has passed without relying on an external, centralized oracle? Traditional PoW can easily quantify how much work goes into a block, keeping block intervals steady, and time is derived from the timestamps of the blocks themselves, with no external time source. How would one quantify how much deep-learning PoW goes into a block?
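For comparison, here is roughly how hash-based PoW quantifies work and keeps intervals steady, as a minimal Python sketch (the constants and names are illustrative, not taken from any real client):

Code:
# Bitcoin-style difficulty retargeting, simplified. The "work" per block
# is quantified by the difficulty, and time comes only from timestamps
# recorded in the blocks themselves.

TARGET_BLOCK_TIME = 600    # desired seconds per block
RETARGET_WINDOW = 2016     # blocks between adjustments

def retarget(old_difficulty, first_ts, last_ts):
    actual = last_ts - first_ts                      # seconds the window took
    expected = TARGET_BLOCK_TIME * RETARGET_WINDOW   # seconds it should take
    ratio = max(0.25, min(4.0, expected / actual))   # clamp, as Bitcoin does
    return old_difficulty * ratio

There is no obvious analogue of that difficulty knob for "train a better DNN", which is the crux of point 4.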
full member
Activity: 351
Merit: 134
Quote:
We at GDOC (Global Data Ownership Chain) are contemplating this approach, but would like to solicit inputs from the powers that be.

Thank you for sharing your ideas and feedback.

Not easily achievable IMO. For PoW, you need two characteristics:

1) The solution must apply to data available 'within' the chain
2) Any proposed solution must be easily verifiable using only the solution itself

What you're proposing fails both of these requirements.
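For contrast, a minimal sketch of why classic hash PoW meets both requirements (simplified to a single SHA-256 rather than Bitcoin's double hash; the names are illustrative):

Code:
import hashlib

def verify_pow(header: bytes, nonce: int, difficulty_bits: int) -> bool:
    # Requirement 1: the input is data from within the chain (the header).
    # Requirement 2: verification needs only the solution itself -- one hash.
    digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < 2 ** (256 - difficulty_bits)

Verifying a trained DNN, by contrast, means re-running a whole validation pipeline over an external dataset: neither cheap nor chain-internal.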
newbie
Activity: 8
Merit: 0
Which phase of GDOC would the DNN be deployed in?
What impact would the DNN's deployment have on GDOC?

jr. member
Activity: 168
Merit: 3
Please, read: Daniel Ellsberg, The Doomsday Machine
with backpropagation we need lots of weight pushes/pulls between nodes ...

quick recap

For symbolists, all intelligence can be reduced to manipulating symbols, in the same way that a mathematician solves equations by replacing expressions by other expressions. Symbolists understand that you can’t learn from scratch: you need some initial knowledge to go with the data. They’ve figured out how to incorporate preexisting knowledge into learning, and how to combine different pieces of knowledge on the fly in order to solve new problems. Their master algorithm is inverse deduction, which figures out what knowledge is missing in order to make a deduction go through, and then makes it as general as possible.

For connectionists, learning is what the brain does, and so what we need to do is reverse engineer it. The brain learns by adjusting the strengths of connections between neurons, and the crucial problem is figuring out which connections are to blame for which errors and changing them accordingly. The connectionists’ master algorithm is backpropagation, which compares a system’s output with the desired one and then successively changes the connections in layer after layer of neurons so as to bring the output closer to what it should be.

Evolutionaries believe that the mother of all learning is natural selection. If it made us, it can make anything, and all we need to do is simulate it on the computer. The key problem that evolutionaries solve is learning structure: not just adjusting parameters, like backpropagation does, but creating the brain that those adjustments can then fine-tune.

Bayesians are concerned above all with uncertainty. All learned knowledge is uncertain, and learning itself is a form of uncertain inference. The problem then becomes how to deal with noisy, incomplete, and even contradictory information without falling apart.

The solution is probabilistic inference, and the master algorithm is Bayes’ theorem and its derivates. Bayes’ theorem tells us how to incorporate new evidence into our beliefs, and probabilistic inference algorithms do that as efficiently as possible.

For analogizers, the key to learning is recognizing similarities between situations and thereby inferring other similarities. If two patients have similar symptoms, perhaps they have the same disease. The key problem is judging how similar two things are. The analogizers’ master algorithm is the support vector machine, which figures out which experiences to remember and how to combine them to make new predictions.

-- Pedro Domingos, AAAI
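To make the backpropagation description (and the "pushes/pulls" point above) concrete, here is a toy two-layer example in Python/NumPy; purely illustrative:

Code:
import numpy as np

# Toy backpropagation: compare the output with the target, then pass the
# blame backwards layer by layer and nudge every connection strength.
# Each update touches all the weights -- hence the heavy push/pull
# traffic when this is distributed across machines.

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 3))                # 64 samples, 3 features
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0    # toy binary target
W1 = rng.standard_normal((3, 8)) * 0.1          # input -> hidden weights
W2 = rng.standard_normal((8, 1)) * 0.1          # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    h = sigmoid(X @ W1)                  # forward pass
    out = sigmoid(h @ W2)
    d_out = (out - y) * out * (1 - out)  # blame at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # blame passed to the hidden layer
    W2 -= 0.5 * (h.T @ d_out) / len(X)   # adjust connection strengths
    W1 -= 0.5 * (X.T @ d_h) / len(X)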
newbie
Activity: 15
Merit: 3
Has anyone seen work attempted on using deep neural network (DNN) training/validation as POW?

The basic idea is to let everyone solve a randomly picked (hashed) useful but difficult machine learning problem, of which there is a continuous supply, e.g., detecting the top 1,000 wanted fugitives across hundreds of thousands of public live video streams.

POW can take the form of DNN model optimization, where the work is submitted as a Dockerfile plus a model file containing the trained network weights and a configuration file, which anyone can validate by running Docker or Kubernetes against the agreed-upon training dataset and validation methodology.

The lowest achieved 10-fold cross-validated error that has not been surpassed within a fixed period of, say, 1 hour is confirmed as the POW winner and vested with the right to package transactions into the next block. This is reminiscent of the Netflix challenge, except that here the train/test data is open.
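To sketch what that validation step might look like on a node (assuming a scikit-learn-style model interface; the digits dataset and logistic regression below are toy stand-ins for the chain's agreed dataset and a miner's submitted model):

Code:
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def validate_submission(model, X, y, claimed_error, tol=1e-6):
    # Re-run the agreed 10-fold CV and check the miner's claimed error.
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    return 1.0 - scores.mean() <= claimed_error + tol

X, y = load_digits(return_X_y=True)        # stand-in for the agreed dataset
model = LogisticRegression(max_iter=1000)  # stand-in for a submitted model
print(validate_submission(model, X, y, claimed_error=0.05))

Note that 10-fold CV makes the verifier retrain the model ten times over, which is exactly the verification-cost concern raised earlier in the thread.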

Advantages of this approach:

1. ASIC resistant, because DNNs are too varied and complex, and require a full Docker image to compute/deploy
2. Achieve a greater good, doing useful work for humanity
3. Promote machine learning / AI

We at GDOC (Global Data Ownership Chain) are contemplating this approach, but would like to solicit inputs from the powers that be.

Thank you for sharing your ideas and feedback.