The equivalent of what you're describing for writing would be an AI that can output readable and uniquely stylistic prose, but that's not what Devtome asks for.
It relies on human-generated content and the ability to share and remix that content. How are people going to crawl into my closed-source brain to see how I think about topics, conceptualize storylines, and string sentences together? Would there be a category that said, "I want a low-sci-fi voice tagged first-person and dark humor vs. an academic voice tagged archaic English lexicon"? (Definitely an interesting concept that I wouldn't have thought of without this discussion.) It seems more practical to put the writing up, and also explain the writing process if there's enough interest in it.
Perfect example. Thank you very much!
But hey, after this discussion I would conclude that this is exactly what Devtome secretly wants to achieve: to substitute the writer with an algorithm.
I think the CC BY-SA license and open source are highly compatible concepts, but they're not strictly the same thing. There might be a demand for open-source voice synthesizers, and that is a very different project from human-generated content that is not locked away by copyright. Both are valid, and the degree of their implementation will depend on the demand.
I agree.
Hey, I like artificial intelligence. I would love nothing more than to one day have conversations with an android like Data.
Devcoin should absolutely have a big section about artificial intelligence. But it shouldn't interfere or impose on the human intelligence of any participant that wants to contribute.
In a way, kind of.
First off, maybe an illustrator of some kind that, given a novel, can try to illustrate it.
That way eventually maybe instead of spending millions of devcoins hiring human actors to act out a plot, each person will be free to let their own computer illustrate/animate it for them using their own preferences as to what goblins look like, how a dark and stormy night looks, whether a dark and stormy night seems to them to be better illustrated with a musical score or just storm sound-effects or both, and so on and so on.
Right now when we get an animated illustration of a famous novel or book (e.g. the Bible, or a Dickens novel, or Lord of the Rings, etc.) we get just a particular artist's or director's or film production crew's interpretation of the novel or book, as cast in moving images or enacted as one particular play or screenplay more or less "true to the original".
There is massive need for the ability to automatically illustrate things; nowadays millions of devcoins worth of wealth goes into making just one visual representation of what happens to happen in a game someone plays, and this acts as a massive "moat" or barrier standing in the way of game development.
The plot and mechanics of a play or game or event or performance are in many cases the important part. People still read novels even though movies have been made of them. Partly this might even be due to the failure of many movie versions to faithfully represent the actual novel. They tend not to enact the novel graphically but to chop up the plot, change it around, alter the gender of some characters maybe, all kinds of changes. They seldom just directly display what the events described in the novel, in the order they are described, might actually look like.
There are lots of games out there that use only text to describe events, characters, settings, and objects, partly because that is the most basic and inexpensive way of representing situations and events. To create a graphical client that would illustrate such games would be hard, but it would save billions in costs compared to hiring a movie director and a crew of actors to enact each and every possible permutation of states such a game could be in.
It is also hard to go in reverse: to have actual 3D models of which you only get to see a 2D view, and from what you see on the 2D screen figure out what objects are supposedly there, how much damage which weapon did to what and stuff like that.
These are two very different approaches. In one, you have a state of affairs that can be depicted in various ways: in text, with various artists' graphical impressions, or with various attempts at 3D clients that try to put together some kind of illustration that accurately, concisely, and conveniently represents to players what the actual state of affairs happens to be. In the other, you just get to see images of what some artist thought such a state of affairs might look like, which can make it very hard to compute exactly what state of affairs the artist is trying to convey.
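To make that distinction concrete, here is a minimal sketch (all names and fields invented for illustration, not anything Devcoin actually runs) of the first approach: the game keeps a structured state of affairs, and a text "client" is just one of many possible ways of depicting it. A 2D or 3D client could consume the exact same scene data.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    kind: str     # e.g. "goblin" -- an id into some library of models
    x: float
    y: float
    facing: str   # e.g. "north"

@dataclass
class Scene:
    time_of_day: str
    weather: str
    entities: list

def render_text(scene: Scene) -> str:
    """One possible depiction: plain prose, like a text-only game.
    A graphical client would take the same Scene and draw it instead."""
    lines = [f"It is a {scene.weather} {scene.time_of_day}."]
    for e in scene.entities:
        lines.append(f"A {e.kind} stands at ({e.x}, {e.y}), facing {e.facing}.")
    return "\n".join(lines)

scene = Scene("night", "dark and stormy",
              [Entity("goblin", 3.0, 4.0, "north")])
print(render_text(scene))
```

Going the other way, from a finished image back to a Scene like this, is the hard "reverse" problem described above.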
Ultimately, yes, it would be nice to have a narrator program that can look at the actions of 3D models and deduce what they are doing, what is happening, and what state of affairs they are depicting, and thus be able to describe in words what is visible and what it means in terms of a state of affairs.
But when we post to the English-language Devtome, most of the words we use are available in dictionaries, and those that are not are often proper nouns. The point is that all those words are free open source words, not copyrighted photographs of words that other people cannot use in their own compositions; furthermore, the words are font-independent. We don't have to buy a library of letter-sequences or a patented word-sequencer to use them.
I agree that right now it is hard to find a free open source model of each and every object depicted in arbitrary photographs or a free open source model of each and every instrument that a musical score calls for.
But we should bear that in mind at all times, so as to avoid, for example, using photographs featuring objects we lack free open source models of, and instead try first to get free open source photos (eventually actual models) of each of the objects included in the photo, so that eventually we can compose the photo ourselves.
Devcoin is supposed to be about development, about developing things, free open source things.
So it should focus more on how to develop music or images than on merely trying to fill storage space with some tiny sample of all the possible images and music that can be constructed given the components from which music and images are developed.
Yes, initially we need to "cheat", for example by having 2D images of goblins, orcs, motorcars, ships, shells, sealing-wax, or any other objects that the composer or designer of a scenario or situation or plotline or holonovel might want or need to incorporate into their creation. But we should try to keep in mind at all times that that is a cheat: ultimately we want 3D (or more: including dimensions for range of actions and reactions would be nice too, for example) models of everything, so that we can construct new 2D images on the fly depicting things from any angle of view, and de-construct 2D images into what they are an image of and from what angle. Then, instead of trillions upon trillions of 2D images covering every possible angle and situation, we can compress it all down to "these things, situated thus and so, as seen from this angle under this type of lighting".
There is still massive scope for artists, and a lot of their work can be made much easier and more efficient. Instead of having to spend all day drawing or painting a cartoon one frame at a time, they will be able to simply describe what the cartoon is to depict and have 2D-view frames of those things and/or characters performing those activities generated at thus and such a frame-rate.
I do understand the concerns about artistic creativity, but please also try to understand that a lot of artists do a lot of drudge-work/grunt-work that seems to them horribly un-creative: full-time jobs producing "creative" visuals and the like, work that does not seem "creative" at all to them. Sometimes they even complain that such work dulls their creativity. (Citation needed?)
Often some lead artist or director or game-designer, for example, dictates exactly how everything is to look; the lead artist sketches, maybe even fully fleshes out, one or more samples so the drudge-work guys see what style/mood/feel/theme they are to imitate, and the bulk of the "artists" then get to spend day after day churning out all the different view angles of the objects, all the different lighting conditions the characters might be seen in, and on and on like that: total drudgery.
Just recently I saw a tool to help artists with that drudge-work, it let you automatically generate pretty good "under different lighting conditions" tiles for a 2D platform-type game from just a few renders, instead of having to manually render all the combinations / permutations. It was amazing, give it a flat 2D image and a few other 2D things and presto it generates for you a whole range of "it looks textured aka not 2D" versions for different lighting-situations. Amazing.
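I can't name the tool, but the underlying trick can be sketched in miniature (this is my own toy reconstruction, not the tool's actual code): give each tile a height profile, and compute a crude Lambert-style shading for each light direction, instead of hand-drawing every lighting variant by hand.

```python
import math

def shade_tile(heights, light_angle_deg):
    """Return per-pixel brightness (0..1) for a 1D strip of heights,
    lit from the given angle; a crude Lambertian (dot-product) model."""
    ang = math.radians(light_angle_deg)
    lx, ly = math.cos(ang), math.sin(ang)
    out = []
    for i in range(len(heights)):
        # estimate the slope from neighbouring samples
        left = heights[max(i - 1, 0)]
        right = heights[min(i + 1, len(heights) - 1)]
        slope = (right - left) / 2.0
        # unit normal of a surface with that slope (2D toy case)
        n = 1.0 / math.sqrt(slope * slope + 1.0)
        nx, ny = -slope * n, n
        # brightness = clamped dot product of normal and light direction
        out.append(max(0.0, nx * lx + ny * ly))
    return out

bump = [0, 0, 1, 2, 2, 1, 0, 0]  # one tile with a bump in the middle

# one base tile, many lighting variants, no extra hand-drawing
variants = {a: shade_tile(bump, a) for a in (30, 90, 150)}
```

The real tool works on full 2D images with normal maps rather than a 1D strip, but the principle is the same: the artist supplies one set of inputs and the combinations/permutations of lighting are computed, not drawn.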
Too much of what artists do (in 9-5 jobs, for example) is very far from creative in their own eyes, and having artists do it is very expensive. So if we can make a tool that will illustrate a novel or plot or in-game situation without having to force artists to spend endless hours doing drudge-work that would be awesome.
It would still leave tons of room for creative art, though. Just because your "make a movie of any novel any time you like" software comes with a bunch of off-the-shelf models of objects-found-in-novels with which to illustrate novels in no way means that an artist who makes a different set of objects the same software can use will not be able to find buyers; quite likely many people will be willing to buy object-sets that they find more pleasing to their eyes than any set already out there.
Look at Second Life: in Second Life you can edit your avatar, but people still hire artists to manually and painstakingly make a whole new avatar or skin depicting the player.
So I think this kind of automation might even increase the market for custom hand-made artwork, since once anyone can have a model representing them or their house or whatever just by telling the computer various instructions like "give me bigger ears... darker skin... quiver over my left shoulder... gold ring on my left ring-finger..." etc, there will probably be people who will still want hand-crafted ones if even just to be able to say "ha ha my avatar is better than yours because good luck describing mine and having the default avatar-building software duplicate it without outright copying the hand-crafted skin that I am wearing".
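A toy sketch of what that "tell the computer" avatar editing might look like (the instruction vocabulary, parameter names, and numbers here are entirely made up for illustration):

```python
def default_avatar():
    # the avatar is just a bag of parameters the renderer understands
    return {"ear_size": 1.0, "skin_tone": "medium", "items": []}

def apply_instruction(avatar, instruction):
    """Mutate the avatar according to one plain-text instruction."""
    if instruction == "bigger ears":
        avatar["ear_size"] *= 1.25
    elif instruction.startswith("skin "):
        avatar["skin_tone"] = instruction.split(" ", 1)[1]
    elif instruction.startswith("add "):
        avatar["items"].append(instruction.split(" ", 1)[1])
    return avatar

me = default_avatar()
for step in ["bigger ears", "skin darker",
             "add quiver over left shoulder",
             "add gold ring on left ring-finger"]:
    apply_instruction(me, step)
```

The point is that the result is a parameter set, not a fixed image, so any client can render it, and a hand-crafted skin remains something genuinely beyond what this kind of knob-turning can reproduce.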
But y'see for free open source we wouldn't want them to be wearing a hand-crafted skin that isn't free open source, because we want to be able to depict their character freely on other servers, take a copy home and modify it as we wish and so on and so on.
The big thing, I suspect, is thinking in terms of composing from components. The actual skin, and the actual frame over which to put the skin, are better than just a bunch of 2D images of the avatar as seen from various angles.
So we should try to have models of all the things shown in a photograph in preference to the photograph itself: the models and skins for the creatures instead of just single views of creatures as seen from various angles, and so on.
-MarkM-