
Topic: [ANN][ARGUS]|ARGUS|| ACCOUNTING AND AUDITING PLATFORM ON THE BLOCKCHAIN | - page 10. (Read 43110 times)

sr. member
Activity: 770
Merit: 251
live the dream but don't live the dream
hello dev
I've been working on the Bounty
check PM

My WALLET :  7B8c53tBQYPiCJasVHe6DuG2cM7f3WHLQN
newbie
Activity: 42
Merit: 0
Looks like ARGUS is breaking 20k sats today and going to a new high.
hero member
Activity: 2128
Merit: 757
NO WAR ! Glory to Ukraine !
Yes, the coins will be swapped on 20th April.
Yes, thanks very much!
Well, we will wait for 20th April Smiley
I thought... one and a half months... Why so long?
full member
Activity: 154
Merit: 100
Argus should and needs to be added to some major exchanges.
If someone has contacts, message them to list Argus. It will help investors.
And about the thread question, PoW > PoS or PoW+PoS > ICO:

ICO = SCAM
full member
Activity: 238
Merit: 100
|Argus| Accounting and Auditing on the Blockchain
We will be changing the account password and email ID so that we can share the moderating work on the bitcointalk thread and answer everyone at a good speed.
newbie
Activity: 42
Merit: 0
Time to buy some ARGUS. Let's MOON.
member
Activity: 70
Merit: 10
OK, my brothers.
Respect.
hero member
Activity: 2128
Merit: 757
NO WAR ! Glory to Ukraine !
Yes, the coins will be swapped on 20th April.
Yes, thanks very much!
Well, we will wait for 20th April Smiley
newbie
Activity: 42
Merit: 0
Hello Sekkenno
How are you, brother? Don't worry, it's all good; if we wait a while, we will pump the coin and sell for good profits.
newbie
Activity: 42
Merit: 0
I also need my old Argus, but I think I can wait; by then one Argus will be 30,000 sats again  Cool
member
Activity: 70
Merit: 10
We have already mentioned that we are following a proper procedure to safeguard the interests of everyone.


Sir, we are talking about investments.
But what happens if the price hits the bottom by then?
I invested in Argus to gain, not to lose.
member
Activity: 70
Merit: 10
Yes, the coins will be swapped on 20th April.

No.
We do not accept.
I made all of my investments in Argus.
We want our rights.

Otherwise we will have to complain.

Because we are not guilty.
If we are not guilty,
why are we the victims?
full member
Activity: 238
Merit: 100
|Argus| Accounting and Auditing on the Blockchain
We have already mentioned that we are following a proper procedure to safeguard the interests of everyone.
member
Activity: 70
Merit: 10
Yes, the coins will be swapped on 20th April.

No.
We do not accept.
I made all of my investments in Argus.
We want our rights.

Otherwise we will have to complain.
full member
Activity: 238
Merit: 100
|Argus| Accounting and Auditing on the Blockchain
Old Argus coins will be destroyed, and holders of old Argus will receive the new Argus coins at the address they have provided to the team.

member
Activity: 70
Merit: 10
Yes, the coins will be swapped on 20th April.

No.
We do not accept.
I made all of my investments in Argus.
We want our rights.
member
Activity: 70
Merit: 10
UPDATE 1.0.2
The coin swap will happen on 20th April.
Yobit and Liqui bounties will close on 5th March.

Best Regards,
Victor
ArgusTeam

How so?
What will happen on April 20th?
When will Argus be sent to us?
Please give a clear answer to this.
full member
Activity: 238
Merit: 100
|Argus| Accounting and Auditing on the Blockchain
Yes, the coins will be swapped on 20th April.
hero member
Activity: 2128
Merit: 757
NO WAR ! Glory to Ukraine !
UPDATE 1.0.2
The coin swap will happen on 20th April.
Yobit and Liqui bounties will close on 5th March.

Best Regards,
Victor
ArgusTeam

Please tell us, what about those people whose old coins remain on coinexchange.io?
full member
Activity: 238
Merit: 100
|Argus| Accounting and Auditing on the Blockchain
ARGUS DEV UPDATE 1.0.1


ARGUS DATABASE CLIENT

Using network flow data for network operations, performance, and security management is a large-data problem, in that we're talking about collecting, processing, and storing a large amount of data. Modern blockchain database management (MBDM) technology works very well to assist in processing and mining all that data. Some thought and engineering, however, is needed to get the most benefit.
When using MBDM technology to support flow data auditing and analysis, the primary issue is database performance versus data arrival rates. Can the database keep up with inserting the flow data from the sensors while also serving queries against the data? For most companies, corporations, and some universities and colleges, using a database like MySQL to manage and process all the primitive data from their principal argus sensors (or netflow data sources) works well. Many database systems running on contemporary PC technology (dual/quad core, 2+GHz Intel, 8-16GB memory, 1+TB disk) can handle around 500-1000 argus record insertions per second (which is around 50-85M flows per day, or around 70GB of flow data per day), and still have plenty of cycles to handle many standard forensic queries against the data, like "show me all the activity from this network address in the last month".
For larger sites, such as member universities of Internet2, where the flow record demand can get into the 20-50K flows per second range, there are databases that can keep up.
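As a rough illustration of the kind of forensic query mentioned above, here is a minimal sketch, assuming the flow records have already been loaded into a MySQL database. The database name "argusdb", user "argus", table "flows", and its columns are placeholders, not part of the project; the actual schema depends on the fields chosen at insertion time.

Code:
# Hypothetical example: list all activity from one address in the last month.
# Assumes a MySQL table "flows" with (stime, saddr, daddr, proto, sport, dport)
# columns; adjust the names to match your own schema.
mysql -u argus -p argusdb -e "
  SELECT stime, saddr, daddr, proto, sport, dport
  FROM flows
  WHERE saddr = '192.168.0.67'
    AND stime >= NOW() - INTERVAL 1 MONTH
  ORDER BY stime;"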

Reading Netflow data from a file:

Code:
ra -r netflow.file

When reading from the network, argus clients normally expect Argus records, so we have to tell the ra* program that the data source and format are Netflow, what port to use, and, optionally, what interface to listen on. This is currently done using the "-C [host:]port" option.

Code:
ra -C 9996

If the machine ra* is running on has multiple interfaces, you may need to provide the IP address of the interface you want to listen on. This address should be the same as that used by the Netflow exporter.

Code:
ra -C 192.168.0.68:9996



Code:
thoth:tmp carter$ ra -r /tmp/ra.netflow.out
   StartTime  Proto      SrcAddr  Sport   Dir      DstAddr  Dport SrcPkt DstPkt SrcBytes DstBytes
12:34:31.658    udp 192.168.0.67.61251     ->  192.168.0.1.snmp        1      0       74        0
12:34:31.718    udp 192.168.0.67.61252     ->  192.168.0.1.snmp        1      0       74        0
12:35:31.848    udp 192.168.0.67.61253     ->  192.168.0.1.snmp       10      0      796        0
12:35:31.938    udp 192.168.0.67.61254     ->  192.168.0.1.snmp        1      0       74        0
12:35:31.941    udp  192.168.0.1.snmp      -> 192.168.0.67.61254       1      0       78        0
12:35:31.851    udp  192.168.0.1.snmp      -> 192.168.0.67.61253      10      0      861        0

thoth:tmp carter$ racluster -r /tmp/ra.netflow.out
   StartTime  Proto      SrcAddr  Sport   Dir      DstAddr  Dport SrcPkt DstPkt SrcBytes DstBytes
12:34:31.658    udp 192.168.0.67.61251     ->  192.168.0.1.snmp        1      0       74        0
12:34:31.718    udp 192.168.0.67.61252     ->  192.168.0.1.snmp        1      0       74        0
12:35:31.848    udp 192.168.0.67.61253     ->  192.168.0.1.snmp       10     10      796      861
12:35:31.938    udp 192.168.0.67.61254     ->  192.168.0.1.snmp        1      1       74       78

Using a database for handling argus data provides some interesting solutions to some interesting problems. racluster() has been limited in how many unique flows it can process because of RAM limitations. rasqlinsert() can solve this problem, as it can do the aggregation of racluster() but use a MySQL table as the backing store rather than memory. Programs like rasort(), which read in all the argus data, use qsort() to sort the records, and then output the records as a stream, have scaling issues, in that you need enough memory to hold all the binary records.
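As a minimal sketch of the rasqlinsert() approach described above, the command below aggregates flow records into a MySQL table instead of RAM. The user, database, and table names are placeholders, and the exact option syntax may vary between argus-clients releases.

Code:
# Hedged sketch: racluster()-style aggregation backed by a MySQL table.
# "argus", "argusdb", and "flows" are placeholder names, not part of the project.
rasqlinsert -r /tmp/ra.netflow.out \
            -m saddr daddr proto dport \
            -w mysql://argus@localhost/argusdb/flows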
