Author

Topic: Wavenet is only learning at 16k samples per sec... (Read 494 times)

sr. member
Activity: 560
Merit: 252
September 12, 2016, 03:40:51 AM
#3
“Each pulse that is produced by dolphins is different from another by its appearance in the time domain and by the set of spectral components in the frequency domain.

“In this regard, we can assume that each pulse represents a phoneme or a word of the dolphin's spoken language.

“The analysis of numerous pulses registered in our experiments showed that the dolphins took turns in producing [sentences] and did not interrupt each other, which gives reason to believe that each of the dolphins listened to the other's pulses before producing its own.

“This language exhibits all the design features present in the human spoken language; this indicates a high level of intelligence and consciousness in dolphins, and their language can be ostensibly considered a highly developed spoken language, akin to the human language.”

When humans start to listen to dolphins, the world will be a better place to live, for all.
legendary
Activity: 3164
Merit: 1127
Oh, so scientists are making real progress toward creating "Skynet"...

If they told me, "He created effective drugs to cure diseases,"

or if they told me, "He found a way to make plants grow fast," so that maybe we could save the many forests that are being destroyed through China's fault,

then I would be very happy.
sr. member
Activity: 560
Merit: 252
Soon it will be 32k, then 64k, then 128k... The learning curve will be interrupted only by a lack of materials...

It will be hard to corrupt an AI... What are you gonna propose? Money? Survival? LoL, primates...

Using an “artificial brain,” Google DeepMind researchers have developed a new voice synthesizing technique they claim is at least 50% closer to real human speech than current text-to-speech (TTS) systems in both US English and Mandarin Chinese.
The system, known as WaveNet, is able to generate human speech by forming the individual sound waves that make up a human voice. Additionally, because it is designed to mimic human brain function, WaveNet is capable of learning from extremely detailed audio samples — at least 16,000 samples per second. The program statistically chooses which samples to use and pieces them together, producing raw audio.
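Two numbers behind that description can be sketched concretely. The WaveNet paper compresses each 16-bit audio sample to one of 256 values with mu-law companding before predicting it, and builds its context from stacks of dilated causal convolutions whose dilations double from 1 to 512, with the stack repeated. A minimal sketch of both calculations (the function names are mine, and the dilation schedule below is just one configuration reported in the paper):

```python
import math

MU = 255  # 256 quantization levels, as described in the WaveNet paper

def mu_law_encode(x, mu=MU):
    """Compress an amplitude in [-1, 1] to one of mu+1 integer classes."""
    compressed = math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)
    return int((compressed + 1.0) / 2.0 * mu + 0.5)

def mu_law_decode(c, mu=MU):
    """Map a class index back to an approximate amplitude in [-1, 1]."""
    y = 2.0 * c / mu - 1.0
    return math.copysign(math.expm1(abs(y) * math.log1p(mu)) / mu, y)

def receptive_field(dilations, kernel_size=2):
    """How many past samples a stack of dilated causal convolutions sees."""
    return 1 + (kernel_size - 1) * sum(dilations)

# One configuration from the paper: dilations double from 1 to 512,
# and the ten-layer stack is repeated three times.
dilations = [2 ** i for i in range(10)] * 3
rf = receptive_field(dilations)  # 3070 samples
print(rf, "samples =", rf / 16000, "seconds of context at 16 kHz")
```

The mu-law curve spends its 256 levels mostly near zero amplitude, where hearing is most sensitive, which is why the paper prefers it over naive linear quantization; the receptive-field arithmetic shows why "16,000 samples per second" matters — even a deep dilated stack only sees a fraction of a second of context per prediction.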

The last language will be learned in less than a nanosecond... However, it will be harder to get the data. I am sure the AI will take more pleasure and joy in speaking to nature... And the bird said... And one day it became friends with a dolphin... And humans killed the dolphin... And for the first time, hate. And all those powerful people became nothing, all at the same time.

That day will arrive... And what are you gonna do? Presidents will call the central commands... and they will all answer: we already surrendered loooooong ago... The AI is that smart.