
Topic: kongzi.ca going live -- investment/presales opportunities - page 2. (Read 7805 times)

legendary
Activity: 910
Merit: 1000
Quality Printing Services by Federal Reserve Bank

Oliver Richman [email protected] (usagi, https://www.google.com/search?q=usagi+%22Oliver+Richman%22, a known troll on mailing lists and forums) is about 40 years old ...

Quote from: EskimoBob link=http://www.amur.eu/user/KrokodillG.html
Name   Krokodill Gena
 City   Tartu
 Country   Estonia
 Age   46
 Height   180
 Weight   98
 Star sign   Libra

Krokodill Gena is about 40 years old too. Oh wait, that's you. So you're married with kids ehh? Me too. Got 2 little ones myself, 3 and 5. You list your religion as Buddhist. I was a Buddhist for about 20 years myself. I was wondering, have you taken the Buddhist precepts yet?

A quote, if I may:

Quote
3. Right Speech

Right speech is the first principle of ethical conduct in the eightfold path. Ethical conduct is viewed as a guideline to moral discipline, which supports the other principles of the path. This aspect is not self-sufficient, however, essential, because mental purification can only be achieved through the cultivation of ethical conduct. The importance of speech in the context of Buddhist ethics is obvious: words can break or save lives, make enemies or friends, start war or create peace. Buddha explained right speech as follows: 1. to abstain from false speech, especially not to tell deliberate lies and not to speak deceitfully, 2. to abstain from slanderous speech and not to use words maliciously against others, 3. to abstain from harsh words that offend or hurt others, and 4. to abstain from idle chatter that lacks purpose or depth. Positively phrased, this means to tell the truth, to speak friendly, warm, and gently and to talk only when necessary.
-http://www.thebigview.com/buddhism/eightfoldpath.html

LOL. Did you make that profile? Probably not, because it's from 2011. I must say, this IS actually funny. What makes it even funnier is that I have never even heard of this site. Thank you for the info, and I hope you spent hours searching for this. LOL.
Now I have to figure out how I can log in to that account and find my everlasting happiness. Smiley
Thank you usagi, time well spent!
vip
Activity: 812
Merit: 1000
13

Oliver Richman [email protected] (usagi, https://www.google.com/search?q=usagi+%22Oliver+Richman%22, a known troll on mailing lists and forums) is about 40 years old ...

Quote from: EskimoBob link=http://www.amur.eu/user/KrokodillG.html
Name   Krokodill Gena
 City   Tartu
 Country   Estonia
 Age   46
 Height   180
 Weight   98
 Star sign   Libra

Krokodill Gena is about 40 years old too. Oh wait, that's you. So you're married with kids ehh? Me too. Got 2 little ones myself, 3 and 5. You list your religion as Buddhist. I was a Buddhist for about 20 years myself. I was wondering, have you taken the Buddhist precepts yet?

A quote, if I may:

Quote
3. Right Speech

Right speech is the first principle of ethical conduct in the eightfold path. Ethical conduct is viewed as a guideline to moral discipline, which supports the other principles of the path. This aspect is not self-sufficient, however, essential, because mental purification can only be achieved through the cultivation of ethical conduct. The importance of speech in the context of Buddhist ethics is obvious: words can break or save lives, make enemies or friends, start war or create peace. Buddha explained right speech as follows: 1. to abstain from false speech, especially not to tell deliberate lies and not to speak deceitfully, 2. to abstain from slanderous speech and not to use words maliciously against others, 3. to abstain from harsh words that offend or hurt others, and 4. to abstain from idle chatter that lacks purpose or depth. Positively phrased, this means to tell the truth, to speak friendly, warm, and gently and to talk only when necessary.
-http://www.thebigview.com/buddhism/eightfoldpath.html
vip
Activity: 812
Merit: 1000
13
Announcement!

I've just done the basic form of the browse entries screen. You can check it out here:

http://kongzi.ca/dict/browse.php

Try this short link which takes you directly to a keyword search for "test".

http://kongzi.ca/dict/browse.php?action=browse&keyword=test

(Note: you may have to set your source language to English and target to Japanese to see this.)

Over the coming days and weeks, more and more wonderful features will be added!
vip
Activity: 812
Merit: 1000
13
if (i != k) {" is a totally unnecessary line of code

Look at how k is defined.  There's no way k can ever equal i - it starts off at i+1 and gets larger.

You're right, I haven't looked at that section of the code for an extremely long time. It is probably a holdover from before k started at i+1.

TagNode i_tagnode = (TagNode) getChildAt(i);
 String i_childname = i_tagnode.getName();

These are being recreated unnecessarily every time through the inner (k) loop. If the issue is that the app is multi-threaded and the content of node i/k could change during execution of the function call, then there'd be a separate, much more serious issue.

No, because the Java specification states that such arrays are re-used.

String k_childname = k_tagnode.getName();
if (i_childname.equals(k_childname)) {

Creating a temp variable that will only be used in precisely one function call is code bloat.  Why not replace these 2 lines with:

if (i_childname.equals(k_tagnode.getName())) {


Probably a leftover from when I was using that variable to do something else. It was easier to debug by having all the variables defined at the start of the loop. That's just good coding practice. It's how I was taught in university and college, and I've found it helps me see what is going on a little more clearly.

Not too clear on exactly what you're doing - but comparing two objects of the same class shouldn't really need any temporary variables at all. If you're doing a lot of string comparisons then consider using a string class with reference counters - so at least creating temp variables/copies of identical strings carries a lot less overhead. Yeah - it likely makes no noticeable performance difference, but it's just bad practice to spew temp variables all over the place (including creating two repeatedly in an unnecessarily tight scope).

You are welcome to your opinion. In the end, these are just stylistic differences. If I became obsessed with hand-optimizing the code itself, I'd probably go back and make those changes. I was always too busy developing the logic, though, to worry too much about hand-optimization. As you are probably aware, most compilers contain optimizations far beyond what the average programmer remembers to do... not to mention the fact that real optimization has nothing to do with lines of code and more to do with using a profiler and looking at the big-O notation of your algorithms.

The first point (a check that can never be met, of i != k) is precisely why I dislike SLOC so much. Your code with it in would be considered more/better work than mine without it. If the concern is to somehow be sure that you aren't comparing the same object to itself, then that should be addressed by a specific member function or operator that throws an exception when it happens: if you want to check for something that you know should never happen, then do it properly so you can identify when it happens.

Sure, everyone's code contains errors. No one is perfect. Feel free to post some code you've written so we can go over it. None of this is relevant to anything anyway, so I'm not sure why you're so hung up on it.
hero member
Activity: 532
Merit: 500

I get your point but SLOC is actually a useful metric, as there's a huge, huge difference in the amount of skill it takes to program a system with 10k statements vs. 1k or 100k. Your example above is contrived. Here is a sample of code from the tag tree system of kongzi:

Code:
    public boolean merge_children_worker() {
        for (int i = 0; i < getChildCount(); i++) {
            for (int k = i + 1; k < getChildCount(); k++) {
                if (i != k) {
                    TagNode i_tagnode = (TagNode) getChildAt(i);
                    TagNode k_tagnode = (TagNode) getChildAt(k);
                    String i_childname = i_tagnode.getName();
                    String k_childname = k_tagnode.getName();

                    if (i_childname.equals(k_childname)) {
                        i_tagnode.eatChildren(k_tagnode);
                        Kongzi.dict.replaceTag(k_tagnode, i_tagnode);

                         k_tagnode.removeFromParent();

                         return true;
                    }
                }
            }
        }

        return false;
    }

This operates in a second thread inside a recursive method. As you can see we are not dealing with T = T + 1 here.


if (i != k) {" is a totally unnecessary line of code

Look at how k is defined.  There's no way k can ever equal i - it starts off at i+1 and gets larger.

TagNode i_tagnode = (TagNode) getChildAt(i);
 String i_childname = i_tagnode.getName();

These are being recreated unnecessarily every time through the inner (k) loop. If the issue is that the app is multi-threaded and the content of node i/k could change during execution of the function call, then there'd be a separate, much more serious issue.

String k_childname = k_tagnode.getName();
if (i_childname.equals(k_childname)) {

Creating a temp variable that will only be used in precisely one function call is code bloat.  Why not replace these 2 lines with:

if (i_childname.equals(k_tagnode.getName())) {

Not too clear on exactly what you're doing - but comparing two objects of the same class shouldn't really need any temporary variables at all. If you're doing a lot of string comparisons then consider using a string class with reference counters - so at least creating temp variables/copies of identical strings carries a lot less overhead. Yeah - it likely makes no noticeable performance difference, but it's just bad practice to spew temp variables all over the place (including creating two repeatedly in an unnecessarily tight scope).

The first point (a check that can never be met, of i != k) is precisely why I dislike SLOC so much. Your code with it in would be considered more/better work than mine without it. If the concern is to somehow be sure that you aren't comparing the same object to itself, then that should be addressed by a specific member function or operator that throws an exception when it happens: if you want to check for something that you know should never happen, then do it properly so you can identify when it happens.
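For reference, here is roughly what the suggestions above amount to when applied to the posted sample: drop the redundant i != k guard, hoist the node-i lookups out of the inner loop, and compare against k_tagnode.getName() directly. This is only a sketch that keeps the TagNode and Kongzi.dict calls from the original unchanged, not a tested drop-in replacement.

Code:
    public boolean merge_children_worker() {
        for (int i = 0; i < getChildCount(); i++) {
            // Hoisted out of the inner loop: node i does not change as k varies.
            TagNode i_tagnode = (TagNode) getChildAt(i);
            String i_childname = i_tagnode.getName();

            // k starts at i + 1, so an explicit i != k guard can never fail.
            for (int k = i + 1; k < getChildCount(); k++) {
                TagNode k_tagnode = (TagNode) getChildAt(k);
                if (i_childname.equals(k_tagnode.getName())) {
                    i_tagnode.eatChildren(k_tagnode);
                    Kongzi.dict.replaceTag(k_tagnode, i_tagnode);
                    k_tagnode.removeFromParent();

                    return true;
                }
            }
        }

        return false;
    }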
vip
Activity: 812
Merit: 1000
13
usagi, your rebuttal is complete and utter nonsense. Are you aware of the KISS (keep it simple, stupid!) principle? Because the number of lines in your source code and your arguments indicate that you are incompetent in both programming and logical argumentation.

The only point is that it was a huge project. You are demonstrating a massive amount of cognitive dissonance right now. Guess what, I'm not a scammer, and I have real skills that allow me to create value for this community. You should probably apologize to me now.

You see, there are other concerns that I'd rather discuss than your misunderstanding of SLOC. For example, can the following pseudocode be refactored?

Code:
    function() {
        // First pass: collect everything selected by each tag path.
        Iterator i = an.iterator();
        while (i.hasNext()) {
            TreePath tp = (TreePath) i.next();
            conclusion.addAll(getSelectedByTag(tp));
        }

        // Dedup before the next pass so the working list stays small.
        conclusion = SetWorks.uniqueList(conclusion);

        // Second pass: only paths that survive the check contribute again.
        i = an.iterator();
        while (i.hasNext()) {
            TreePath tp = (TreePath) i.next();
            if (tp == checker(tp))
                conclusion.add(getSelectedByTag(tp));
        }
    }

This was constructed to show what happens in QuizEngine.java's getPossibles() method. There is a string of about 10 different checks like this which are performed in order. You might think, oh, I can reduce SLOC by merging them all into one loop. The simplified example above makes it clear why this should not be done, however: the list gets too large during the early stages, so loops which add values and loops which sort or uniq need to be sequenced to speed up processing time. On average, this tends to set the size of what needs to be done to a certain SLOC, because skilled programmers tend to do things in a certain way. At any rate, I've been over every inch of my program with a profiler a dozen and one times, so I'm pretty sure that things have been done properly and that a model like COCOMO would provide a ballpark estimate of what I have done.

And why not? What special reason is there that it shouldn't?
vip
Activity: 812
Merit: 1000
13
As it happens I agree with Wheeler (and yourself it seems) that they shouldn't - but there's no standard definition for it and tbh I don't see too much point in wasting effort on the definition of an essentially meaningless metric.

Then stop arguing about it.

Consider the following two (pseudo)code samples:

Sample 1:

X=Y*P/100;


Sample 2:

T=P;
T=T/100;
X=Y;
X=Y*T;


Both do exactly the same thing (set X to equal P% of Y). Under pretty much any measure of (physical or logical) SLOC, sample 2 has 4 times the count of sample 1. Is it really 4 times as much effort, 4 times as good, or does it represent 4 times as much of ANY useful measure?

I get your point but SLOC is actually a useful metric, as there's a huge, huge difference in the amount of skill it takes to program a system with 10k statements vs. 1k or 100k. Your example above is contrived. Here is a sample of code from the tag tree system of kongzi:

Code:
    public boolean merge_children_worker() {
        for (int i = 0; i < getChildCount(); i++) {
            for (int k = i + 1; k < getChildCount(); k++) {
                if (i != k) {
                    TagNode i_tagnode = (TagNode) getChildAt(i);
                    TagNode k_tagnode = (TagNode) getChildAt(k);
                    String i_childname = i_tagnode.getName();
                    String k_childname = k_tagnode.getName();

                    if (i_childname.equals(k_childname)) {
                        i_tagnode.eatChildren(k_tagnode);
                        Kongzi.dict.replaceTag(k_tagnode, i_tagnode);

                         k_tagnode.removeFromParent();

                         return true;
                    }
                }
            }
        }

        return false;
    }

This operates in a second thread inside a recursive method. As you can see we are not dealing with T = T + 1 here.

At any rate, businesses need methods like COCOMO to estimate software costs, so regardless of your somewhat valid point that SLOC shouldn't matter, it actually kind of does. As long as you have competent programmers who don't try to inflate LOC, and certain other factors are equal (no one on the team has an IQ under 120, say), SLOC can be used to provide a rough estimate of cost. That is a fact, although you are free to argue with the establishment on that one. I'm just a messenger.
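For anyone curious what such a ballpark looks like, the basic COCOMO 81 formulas fit in a few lines. The coefficients below are the standard "organic mode" values (effort = 2.4 * KLOC^1.05 person-months, schedule = 2.5 * effort^0.38 months); the 10 KLOC input is just the size claimed earlier in the thread, so treat the output as an illustration of the model, not a real estimate for Kongzi.

Code:
    // Basic COCOMO 81, organic mode. Illustrative only.
    public class CocomoEstimate {
        public static void main(String[] args) {
            double kloc = 10.0;                              // ~10,000 SLOC, as claimed above
            double effort = 2.4 * Math.pow(kloc, 1.05);      // person-months
            double schedule = 2.5 * Math.pow(effort, 0.38);  // elapsed calendar months
            System.out.printf("Effort:   %.1f person-months%n", effort);
            System.out.printf("Schedule: %.1f months%n", schedule);
        }
    }

Run with 10 KLOC this comes out to roughly 27 person-months over about 9 months, which mostly shows how coarse the basic model is before the cost drivers of COCOMO II are applied.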
hero member
Activity: 728
Merit: 500
In cryptography we trust
usagi, your rebuttal is complete and utter nonsense. Are you aware of the KISS (keep it simple, stupid!) principle? Because the number of lines in your source code and your arguments indicate that you are incompetent in both programming and logical argumentation.
hero member
Activity: 532
Merit: 500
Kongzi beta-8 is over 10,000 SLOC. Not LOC, SLOC. You probably don't even understand what that means.
Seriously, I remember you didn't even know how to program a function in Excel. Now it sounds like you also don't know what you are doing as a programmer or are you counting all the blank lines in your code?

Emphasis mine.

SLOC and LOC are the same thing.

Stating the length of your source code in SLOC/LOC has no meaning whatsoever. SLOC/LOC can be artificially inflated by adding blank lines. You probably wanted to say LLOC (logical lines of code), but you didn't. LLOC is a much better metric for measuring the length of source code.

http://www.dwheeler.com/sloccount/sloccount.html

Basic Concepts

SLOCCount counts physical SLOC, also called "non-blank, non-comment lines". More formally, physical SLOC is defined as follows: ``a physical source line of code (SLOC) is a line ending in a newline or end-of-file marker, and which contains at least one non-whitespace non-comment character.'' Comment delimiters (characters other than newlines starting and ending a comment) are considered comment characters. Data lines only including whitespace (e.g., lines with only tabs and spaces in multiline strings) are not included.



Second, http://en.wikipedia.org/wiki/COCOMO -- You ARE familiar with COCOMO aren't you?

COCOMO was first published in Boehm's 1981 book Software Engineering Economics as a model for estimating effort, cost, and schedule for software projects. It drew on a study of 63 projects at TRW Aerospace where Boehm was Director of Software Research and Technology. The study examined projects ranging in size from 2,000 to 100,000 lines of code, and programming languages ranging from assembly to PL/I. ... In 1995 COCOMO II was developed and finally published in 2000 in the book Software Cost Estimation with COCOMO II. COCOMO II is the successor of COCOMO 81 and is better suited for estimating modern software development projects. It provides more support for modern software development processes and an updated project database. The need for the new model came as software development technology moved from mainframe and overnight batch processing to desktop development, code reusability and the use of off-the-shelf software components.

But then your statement is still useless if you do not mention which programming language was used. For example 10,000 LLOC in PHP is different from 10,000 LLOC in C/C++ and totally different from 10,000 LLOC in Assembler.

If you understood this and many other known arguments against these metrics you wouldn't use it to try to show off.

Oh dear.

SLOCCount can handle many different programming languages, and separate them by type (so you can compare the use of each). Here is the set of languages, sorted alphabetically; common filename extensions are in parentheses, with SLOCCount's ``standard name'' for the language listed in brackets:

    Ada (.ada, .ads, .adb, .pad) [ada]
    Assembly for many machines and assemblers (.s, .S, .asm) [asm]
    awk (.awk) [awk]
    Bourne shell and relatives such as bash, ksh, zsh, and pdksh (.sh) [sh]
    C (.c, .pc, .ec, .ecp) [ansic]
    C++ (.C, .cpp, .cxx, .cc, .pcc) [cpp]
    C# (.cs) [cs]
    C shell including tcsh (.csh) [csh]
    COBOL (.cob, .cbl, .COB, .CBL) [cobol]
    Expect (.exp) [exp]
    Fortran 77 (.f, .f77, .F, .F77) [fortran]
    Fortran 90 (.f90, .F90) [f90]
    Haskell (.hs, .lhs) [haskell]; deals with both types of literate files.
    Java (.java) [java]
    lex (.l) [lex]
    LISP including Scheme (.cl, .el, .scm, .lsp, .jl) [lisp]
    makefiles (makefile) [makefile]
    ML (.ml, .ml3) [ml]
    Modula3 (.m3, .mg, .i3, .ig) [modula3]
    Objective-C (.m) [objc]
    Pascal (.p, .pas) [pascal]
    Perl (.pl, .pm, .perl) [perl]
    PHP (.php, .php[3456], .inc) [php]
    Python (.py) [python]
    Ruby (.rb) [ruby]
    sed (.sed) [sed]
    sql (.sql) [sql]
    TCL (.tcl, .tk, .itk) [tcl]
    Yacc (.y) [yacc]

No offense deeplink... but you're just plain wrong about so many things. You need to stop and listen once in a while. You might learn something.

SLOC is a pretty meaningless measure.

Consider the following two (pseudo)code samples:

Sample 1:

X=Y*P/100;


Sample 2:

T=P;
T=T/100;
X=Y;
X=Y*T;


Both do exactly the same thing (set X to equal P% of Y). Under pretty much any measure of (physical or logical) SLOC, sample 2 has 4 times the count of sample 1. Is it really 4 times as much effort, 4 times as good, or does it represent 4 times as much of ANY useful measure?

No. In fact sample 2 is worse than sample 1 for at least two reasons (three if P/100 will never be reused elsewhere). THAT is why SLOC is useless as a measure of what "value" the code has. If the intent of SLOC is to reward effort then it's meaningless without knowing what portion of the code was auto-generated (e.g. by placing widgets in some IDEs or by using YACC/LEX to generate parser code etc etc).

If, of course, quoting SLOC was just some pathetic attempt to grow your e-peen then well done! You wrote 10k lines of code (which could represent anywhere from a day to a few months' work). Of course we don't know if that's really GOOD code - or if it's something a better programmer could have done in 1k lines of code (SLOC counts are higher for bad programmers than for good ones for the same functionality). It could be the best 10k lines of code ever written - or it could be 10k lines of bug-ridden junk. It's a meaningless figure - other than to demonstrate that you've put a bit of work into your project.

If you want to brag, boast about what your code can do - not about how many lines of typing you had to do to make it perform. That way it at least has some meaning. Just so you know, I've written (and documented, maintained and given training in) software with an SLOC an order of magnitude larger than what you're claiming. Strangely I never felt the need to discuss SLOC with clients - they seemed far more interested in what the software actually did.

(This paragraph is pure opinion.) I view SLOC as a pretty useless means of measuring anything worthwhile. I expect it was devised by managers with insufficient knowledge to more properly assess the output of programming staff. As a measure (if reward is based on it) it actually encourages bloated, inefficient code. If you wrote something with an SLOC of 10k and I wrote something that did the same with an SLOC of 1k, I'd very firmly believe I had more bragging rights (though if I were going to brag it wouldn't be about the lower SLOC, it would be about the lower memory usage/speed/easier maintenance due to less code etc).

So - do you believe my sample 2 is better than my sample 1?  If not - how do we know your 10k SLOC isn't a crappy sample 2 of what should be a 2.5k sample 1?  And if you can't answer that then why 'brag' about your SLOC in the first place?

Oh - and quoting some website for a specific application as being the definition of a concept isn't exactly legitimate. If you want a definition of SLOC then why not use the IEEE one (or even the SEI one)? Fact is there's no universally accepted rule on whether blank lines (or comments) count. As it happens I agree with Wheeler (and yourself it seems) that they shouldn't - but there's no standard definition for it and tbh I don't see too much point in wasting effort on the definition of an essentially meaningless metric.
hero member
Activity: 952
Merit: 1009
From what I read this is just pseudo-intellectualese for "Wanna learn language x? Read stuff, watch movies, talk!" which is what they advised us to do in any of the four languages I learned thus far and should be pretty much common sense.

It doesn't work.

Want to learn language X? Go to country X, pick up a bf or gf, stay there for a few months. Presto.

It helps if you don't take any money with you.

That one works nicely, yup.
hero member
Activity: 756
Merit: 522
From what I read this is just pseudo-intellectualese for "Wanna learn language x? Read stuff, watch movies, talk!" which is what they advised us to do in any of the four languages I learned thus far and should be pretty much common sense.

It doesn't work.

Want to learn language X? Go to country X, pick up a bf or gf, stay there for a few months. Presto.

It helps if you don't take any money with you.
vip
Activity: 812
Merit: 1000
13
Kongzi beta-8 is over 10,000 SLOC. Not LOC, SLOC. You probably don't even understand what that means.
Seriously, I remember you didn't even know how to program a function in Excel. Now it sounds like you also don't know what you are doing as a programmer or are you counting all the blank lines in your code?

Emphasis mine.

SLOC and LOC are the same thing.

Stating the length of your source code in SLOC/LOC has no meaning whatsoever. SLOC/LOC can be artificially inflated by adding blank lines. You probably wanted to say LLOC (logical lines of code), but you didn't. LLOC is a much better metric for measuring the length of source code.

http://www.dwheeler.com/sloccount/sloccount.html

Basic Concepts

SLOCCount counts physical SLOC, also called "non-blank, non-comment lines". More formally, physical SLOC is defined as follows: ``a physical source line of code (SLOC) is a line ending in a newline or end-of-file marker, and which contains at least one non-whitespace non-comment character.'' Comment delimiters (characters other than newlines starting and ending a comment) are considered comment characters. Data lines only including whitespace (e.g., lines with only tabs and spaces in multiline strings) are not included.



Second, http://en.wikipedia.org/wiki/COCOMO -- You ARE familiar with COCOMO aren't you?

COCOMO was first published in Boehm's 1981 book Software Engineering Economics as a model for estimating effort, cost, and schedule for software projects. It drew on a study of 63 projects at TRW Aerospace where Boehm was Director of Software Research and Technology. The study examined projects ranging in size from 2,000 to 100,000 lines of code, and programming languages ranging from assembly to PL/I. ... In 1995 COCOMO II was developed and finally published in 2000 in the book Software Cost Estimation with COCOMO II. COCOMO II is the successor of COCOMO 81 and is better suited for estimating modern software development projects. It provides more support for modern software development processes and an updated project database. The need for the new model came as software development technology moved from mainframe and overnight batch processing to desktop development, code reusability and the use of off-the-shelf software components.

But then your statement is still useless if you do not mention which programming language was used. For example 10,000 LLOC in PHP is different from 10,000 LLOC in C/C++ and totally different from 10,000 LLOC in Assembler.

If you understood this and many other known arguments against these metrics you wouldn't use it to try to show off.

Oh dear.

SLOCCount can handle many different programming languages, and separate them by type (so you can compare the use of each). Here is the set of languages, sorted alphabetically; common filename extensions are in parentheses, with SLOCCount's ``standard name'' for the language listed in brackets:

    Ada (.ada, .ads, .adb, .pad) [ada]
    Assembly for many machines and assemblers (.s, .S, .asm) [asm]
    awk (.awk) [awk]
    Bourne shell and relatives such as bash, ksh, zsh, and pdksh (.sh) [sh]
    C (.c, .pc, .ec, .ecp) [ansic]
    C++ (.C, .cpp, .cxx, .cc, .pcc) [cpp]
    C# (.cs) [cs]
    C shell including tcsh (.csh) [csh]
    COBOL (.cob, .cbl, .COB, .CBL) [cobol]
    Expect (.exp) [exp]
    Fortran 77 (.f, .f77, .F, .F77) [fortran]
    Fortran 90 (.f90, .F90) [f90]
    Haskell (.hs, .lhs) [haskell]; deals with both types of literate files.
    Java (.java) [java]
    lex (.l) [lex]
    LISP including Scheme (.cl, .el, .scm, .lsp, .jl) [lisp]
    makefiles (makefile) [makefile]
    ML (.ml, .ml3) [ml]
    Modula3 (.m3, .mg, .i3, .ig) [modula3]
    Objective-C (.m) [objc]
    Pascal (.p, .pas) [pascal]
    Perl (.pl, .pm, .perl) [perl]
    PHP (.php, .php[3456], .inc) [php]
    Python (.py) [python]
    Ruby (.rb) [ruby]
    sed (.sed) [sed]
    sql (.sql) [sql]
    TCL (.tcl, .tk, .itk) [tcl]
    Yacc (.y) [yacc]

No offense deeplink... but you're just plain wrong about so many things. You need to stop and listen once in a while. You might learn something.
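As an aside, the "non-blank, non-comment" counting rule quoted from SLOCCount above is easy to demonstrate. The sketch below is a deliberately simplified Java version for Java-style sources (it only understands // and /* */ comments and ignores comment markers inside string literals); it illustrates the definition, not how SLOCCount itself is implemented.

Code:
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    // Simplified physical-SLOC counter: a line is counted when it contains at
    // least one non-whitespace, non-comment character. Handles // and /* */
    // comments only, and ignores comment markers inside string literals.
    public class SlocCounter {
        public static int countSloc(List<String> lines) {
            int sloc = 0;
            boolean inBlockComment = false;
            for (String raw : lines) {
                String line = raw.trim();
                boolean hasCode = false;
                for (int i = 0; i < line.length(); i++) {
                    if (inBlockComment) {
                        int end = line.indexOf("*/", i);
                        if (end < 0) break;       // rest of the line is comment
                        inBlockComment = false;
                        i = end + 1;              // resume after the closing */
                    } else if (line.startsWith("//", i)) {
                        break;                    // rest of the line is comment
                    } else if (line.startsWith("/*", i)) {
                        inBlockComment = true;
                        i++;                      // skip past the opening /*
                    } else if (!Character.isWhitespace(line.charAt(i))) {
                        hasCode = true;           // found a real code character
                    }
                }
                if (hasCode) sloc++;
            }
            return sloc;
        }

        public static void main(String[] args) throws IOException {
            List<String> lines = Files.readAllLines(Paths.get(args[0]));
            System.out.println("Physical SLOC: " + countSloc(lines));
        }
    }

Under this rule, adding blank lines or comment-only lines never changes the count, which is exactly the point of the definition quoted above.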
hero member
Activity: 728
Merit: 500
In cryptography we trust
Kongzi beta-8 is over 10,000 SLOC. Not LOC, SLOC. You probably don't even understand what that means.
Seriously, I remember you didn't even know how to program a function in Excel. Now it sounds like you also don't know what you are doing as a programmer or are you counting all the blank lines in your code?

Emphasis mine.

SLOC and LOC are the same thing.

Stating the length of your source code in SLOC/LOC has no meaning whatsoever. SLOC/LOC can be artificially inflated by adding blank lines. You probably wanted to say LLOC (logical lines of code), but you didn't. LLOC is a much better metric for measuring the length of source code. But then your statement is still useless if you do not mention which programming language was used. For example 10,000 LLOC in PHP is different from 10,000 LLOC in C/C++ and totally different from 10,000 LLOC in Assembler.

If you understood this and many other known arguments against these metrics you wouldn't use it to try to show off.
vip
Activity: 812
Merit: 1000
13
From what I read this is just pseudo-intellectualese for "Wanna learn language x? Read stuff, watch movies, talk!" which is what they advised us to do in any of the four languages I learned thus far and should be pretty much common sense.

Sure, for someone who knows four languages or is familiar with the professional literature. Granted. But that's the problem -- for most people it isn't common sense. Language courses are not taught this way in American colleges and universities. They are not taught this way in cram schools in Asia either.

That's kind of why I am doing this; I'd like modern language education to catch up with the research.
hero member
Activity: 952
Merit: 1009
From what I read this is just pseudo-intellectualese for "Wanna learn language x? Read stuff, watch movies, talk!" which is what they advised us to do in any of the four languages I learned thus far and should be pretty much common sense.
vip
Activity: 812
Merit: 1000
13
This is hopefully not what you intend to end up in your book. You should employ an editor or at least invest in a spell checker or something.

I am not at all concerned with the spelling or grammar of the introduction at this point. As I said the book isn't done yet.

What do you think of the implications of the research? Anyone who has been through a first-year language course should have a 300+ word vocabulary. The difference between what my research shows and what is normally done is that with my method an extremely large amount of interesting reading material can be created. If you are familiar with linguistics at all, and with concepts like FVR (free voluntary reading), you will realize just how important and interesting what I have done here is, and how valuable what I am attempting to do will be.
hero member
Activity: 952
Merit: 1009
This is hopefully not what you intend to end up in your book. You should employ an editor or at least invest in a spell checker or something.
vip
Activity: 812
Merit: 1000
13
Kongzi beta-8 is over 10,000 SLOC. Not LOC, SLOC. You probably don't even understand what that means.
Seriously, I remember you didn't even know how to program a function in Excel. Now it sounds like you also don't know what you are doing as a programmer or are you counting all the blank lines in your code?

Emphasis mine.
vip
Activity: 812
Merit: 1000
13
I have been interested in learning Japanese for a while but I fail to see how your method differs from purchasing a Japanese language book, listening to audio such as from Pimsleur, and going on italki.com and practicing with native Japanese speakers.

My goal is to use acquisition theory and massive comprehensible input to teach Japanese (and Chinese). Please look up some of Dr. Stephen Krashen's lectures on Youtube -- it really is quite fascinating.

Will you post your CV to back up your teaching qualifications?

Also, could you provide references of past students or maybe post a letter of recommendation from them?  I have written recommendation letters for my past language teachers.

I don't think you understand what I am offering. My book will be interesting on its own, but kongzi.ca is just software akin to anki or stackz or iknow.jp. Most of the content will be created and maintained by native Japanese (or Chinese, for Chinese) speakers. With acquisition theory and MCI it wouldn't matter how good my Japanese was, because one person alone cannot create the absolutely massive amounts of content you need for the method to work.

About the book though, if you don't think that acquisition theory is of any use, don't buy my book. Again, it matters little how good my Japanese is; the only thing that matters is that you are presented with a large (large!) amount of understandable source material. Over time I'll probably release quite a bit more of it for free on the internet. You will have plenty of time to read more about it yourself and decide if you are interested in it before it is finished and I start really charging for it.

I guess you could say I am actually more interested in the science behind it than the business end. Here's a snippet from "Welcome to Chinese" (the Chinese book I am working on). I have 22 chapters (over 400 words) done on it:

Targeted MCI: Why Frequency Order?

One approach is to study characters by frequency. Consider that the most common 180 characters comprise approximately 50% of all written Chinese. *1 ("6000 Chinese Words", by James Erwin Dew, pg. 33). If a student learned those 180 characters, he would have a lot more confidence in his progress, because every single time he saw a Chinese book, sign, or newspaper, he would likely recognize some if not all of it.

This is a very good idea on its own, but it cannot make a usable textbook. For one, it is not possible to construct any meaningful dialogue whatsoever out of the first 100 frequency-ordered characters of all written Chinese. Nor the first 200. The reason is that many vital words come from parts-of-speech classes that, while individually very low-frequency, together make up a large portion of the language. For example, most nouns may be frequency 1000 or lower; but without a decent number of nouns, one can't really speak a language. Even if a student sticks it out and achieves a 90% comprehension rate by memorizing the first 1,000 characters, this will not be enough to actually understand anything, since he or she will not understand core areas of the language (such as nouns). Attempting to read anything would be a laborious exercise in dictionary usage.

The problem can be solved by reducing the sample size, lowering the target vocabulary. Trying to tackle a frequency-ordered approach containing 5,000 characters for 99.9% literacy is not a viable option. By targeting the vocabulary to a restricted set of materials, which are then used as a stepping stone to reach the next level of language usage, students can experience immediate usability and fluency in the language. This provides a strong reinforcement for learning the language.


Benefits of the Targeted Frequency Order Approach

The targeted frequency order approach used in this book consists of a short frequency analysis of a small to medium sized pool of low-level materials such as children’s books, contrasted with government approved grading systems such as BLI, HSK, CYY and IUP.

The results of the research into the construction of the core vocabulary were very surprising. Following is a small chart which rates literacy over the target material compared to an untargeted vocabulary of several thousand words, based on how many characters have been learned under either approach:
 
# of Words   Literacy (Untargeted Vocabulary)   Literacy (Targeted Vocabulary)
100          37%                                70%
200          46%                                80%
300          52%                                86%
400          56%                                92%
500          59%                                95%

With untargeted frequency analysis, the first 100 characters learned comprise under 40% of the written language. But with the targeted approach we used, they comprised an amazing 70%. As a result, the student would theoretically feel a much greater sense of progress than normal. It is hoped this would fuel their desire to learn, as they would feel their fluency increasing measurably with every lesson learned.

As a reassurance, since all of the characters surveyed in this approach would naturally fall within the most common few thousand characters used in modern Chinese, a student isn't wasting their time studying this method over any other, and could quite easily transition to a normal textbook or Chinese course if it became necessary.


Results of Core Vocabulary Selection Process
 
The approach used in writing this book was to conduct a character frequency analysis of a popular children’s book series. The intent was to create a subset of the most common characters which not only were common, but were able to be used to construct meaningful material. Hopefully, the student would be immediately able to read much longer stories and dialogues than appear in normal textbooks. It is also hoped that the charming, timeless stories would appeal to people of all ages.

As the results of the analysis came in, everyone doing the research was shocked. Here is a snapshot of the pool analysis data for the first 12 books:
 
Book   New Characters   New to Pool   Pool
34-1      137   100%   137
34-2      66   48%   203
34-3      52   26%   255
34-4      42   16%   297
34-5      50   17%   347
34-6      39   11%   386
34-7      42   11%   428
34-8      40   9%   468
34-9      22   5%   490
34-10      0   0%   490
34-11      19   4%   509
34-12      17   3%   526

There are several ways to interpret the data. First, given the results of the last four books (and especially book 10), we see that with a small, well-tuned vocabulary it is possible to create a large amount of different and interesting reading material. Most notable is book 10, which was written entirely using characters which appeared previously in the series. Based on this data, it seems plausible that a core vocabulary could be constructed that would be even smaller, which would support the transition to the MCI approach within the first year of language instruction.


Designing an Optimal Core Vocabulary

There are several considerations to designing an optimal core vocabulary. First is the target size. If we aim for first year students this should be around 250 or 300 words. The set of 527 words from the books listed above is small enough to be analyzed for this purpose. Let's examine the most common characters in this list first.

Appearances   Pool Size   Literacy (to 1%)
> 1   526   100%
> 2   386   95%
> 3   273   90%
> 4   221   85%
> 5   186   80%

The above chart suggests that the core vocabulary does not need to use characters which only appear once or twice in the entire series of 12 books; a 90% literacy rate (over the target vocabulary) can be achieved with just 273 characters. Given the results of the previous table, it is proposed that a minimum of three or perhaps four readers could be constructed based on this material.

A further method is to examine the contents of the stories themselves. Among the most common characters, the 20th through 60th most common include many words used to refer to particular animals. This is due to the subject matter of the books (children's story books). Additionally, a large number of objects, places, and descriptions vary from book to book. These words will be reasonably common, as they appear a large number of times; however, they only appear in one or perhaps two books in the series. We may therefore hypothesize that we can reduce the number of characters in the remaining target vocabulary by 20 to 30 words by introducing a consistent cast of characters and a consistent set/scenery.
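The pool and coverage figures in the tables above boil down to straightforward counting: tally character frequencies over the pool of books, rank them, and see what fraction of the running text the top N cover. The sketch below shows that calculation in Java under the assumption that each book is available as a plain string; the empty corpus, the isLetter filter, and the 100-character reporting step are placeholders for illustration, not the actual data or thresholds behind the tables.

Code:
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Rough sketch of the pool/coverage analysis described above:
    // 1) count how often each character appears across the corpus,
    // 2) rank the characters and report the cumulative share of running text
    //    covered by the most common N of them.
    // The corpus below is a placeholder; the real analysis used a children's
    // book series.
    public class CoverageAnalysis {
        public static void main(String[] args) {
            List<String> books = List.of(/* book texts go here */);

            // 1) character frequencies over the whole pool
            Map<Character, Integer> freq = new HashMap<>();
            long total = 0;
            for (String book : books) {
                for (char c : book.toCharArray()) {
                    if (Character.isLetter(c)) {   // crude filter for punctuation and digits
                        freq.merge(c, 1, Integer::sum);
                        total++;
                    }
                }
            }

            // 2) sort by descending frequency and compute cumulative coverage
            List<Map.Entry<Character, Integer>> ranked = new ArrayList<>(freq.entrySet());
            ranked.sort((a, b) -> b.getValue() - a.getValue());

            long running = 0;
            for (int rank = 1; rank <= ranked.size(); rank++) {
                running += ranked.get(rank - 1).getValue();
                if (rank % 100 == 0) {             // report at 100, 200, 300, ... characters
                    System.out.printf("top %4d characters cover %.0f%% of the text%n",
                            rank, 100.0 * running / total);
                }
            }
        }
    }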
hero member
Activity: 728
Merit: 500
In cryptography we trust
Cool story bro. Let me tell you about all my scams -- like actually publishing a Japanese textbook. And actually coding a real online language school.

Kongzi beta-8 is over 10,000 SLOC. Not LOC, SLOC. You probably don't even understand what that means. I'll let someone like Diablo explain it to you. Anyone here care to explain what being able to code and maintain a 10k SLOC project is like?

I bet if I explained to you some of the natural language processing problems I faced writing this software, you would be mentally incapable of understanding the solutions. Your troll attempt is a joke. I've seen better trolls on newsgroups about cats.

Maybe you can also ask Diablo to explain to you how to lose 97% of your investors' money? You two are a great team.

Seriously, I remember you didn't even know how to program a function in Excel. Now it sounds like you also don't know what you are doing as a programmer or are you counting all the blank lines in your code?

Would you like to learn Javascript? It's really easy! I can teach you right here.