Author

Topic: My journey from user, to cyborg, to maybe coding a bot. (Read 1091 times)

legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
Just a bit outside my zone of proximal knowledge.  Smiley I might fake it by saying, is that a bash script?
Correct. I like bash, I can do things without knowing what I'm doing Tongue
The grep-part gets all posts with some surrounding lines, then head/tail gets the one I need. It can probably be done more efficiently, but it works.
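
A rough Python equivalent of that grep -B plus head/tail trick (the needle here is a placeholder, not the real pattern):
Code:
# Collect each matching line together with the 11 lines before it,
# like grep -B 11; indexing into the result replaces head/tail.
def blocks_with_context(lines, needle, before=11):
    return [lines[max(0, i - before):i + 1]
            for i, line in enumerate(lines) if needle in line]

with open("recent.html", encoding="utf-8") as f:
    posts = blocks_with_context(f.read().splitlines(), '<div class="post"')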

Quote
I did just watch a video about the origin of grep and Ken Thompson from Computerphile.   Thank you though, I will look into that code snippet. EDIT: https://www.youtube.com/watch?v=NTfOnGZUZDk
I never bothered to check its history, even though it's one of my most used commands.
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Why are you asking for a username in the first place? Each post already contains a username..
Because I'm banging rocks here. Smiley  Like, I don't really know how to find the bet, and then walk backwards in the source to find the username that posted the bet.  Or maybe I do, now that I put it like that. Smiley  The quoted bet is still tough, there needs to be some sort of identifier of what day the bet is for.  Certainly doesn't have to be a full date though.
This is what I do (for scraping recent):
Code:
arraypostrange[$j]="`cat recent.html | grep --text --no-group-separator -B 11 '
I use this inside a loop for each post. The array contains everything related to one post, so I use that to get the username with each post.

Quotes are more fun annoying indeed, to exclude them I count quotedepth, divdepth and divwithinquote. Let's just say it's messy Tongue
Heh.  Just a bit outside my zone of proximal knowledge.  Smiley I might fake it by saying, is that a bash script?  I did just watch a video about the origin of grep and Ken Thompson from Computerphile.   Thank you though, I will look into that code snippet. EDIT: https://www.youtube.com/watch?v=NTfOnGZUZDk

TESTing

100k,Login,1,200,2024-06-15
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
Why are you asking for a username in the first place? Each post already contains a username..
Because I'm banging rocks here. Smiley  Like, I don't really know how to find the bet, and then walk backwards in the source to find the username that posted the bet.  Or maybe I do, now that I put it like that. Smiley  The quoted bet is still tough, there needs to be some sort of identifier of what day the bet is for.  Certainly doesn't have to be a full date though.
This is what I do (for scraping recent):
Code:
arraypostrange[$j]="`cat recent.html | grep --text --no-group-separator -B 11 '
I use this inside a loop for each post. The array contains everything related to one post, so I use that to get the username with each post.

Quotes are more fun annoying indeed, to exclude them I count quotedepth, divdepth and divwithinquote. Let's just say it's messy Tongue
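
A rough Python sketch of that depth-counting idea (simplified: it assumes quote divs open and close on their own lines, which is exactly why the real version also tracks divdepth and divwithinquote):
Code:
def drop_quoted_lines(lines):
    quote_depth = 0
    for line in lines:
        if '<div class="quote' in line:       # entering a quote block
            quote_depth += 1
            continue
        if quote_depth and '</div>' in line:  # leaving one level of quoting
            quote_depth -= 1
            continue
        if quote_depth == 0:
            yield line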
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Cheesy   That was a lot of fun.  I knew satoshi was going to pass, and maybe it's funnier this way.  I saw your post, LoyceV, with the 1,000 bets and was so pleased the code was prepared for that.
Then how did I go from 46137 to 17 CBBits?
Because I reran the wrong code with the correct bitcoin price data, I wrongly thought the price went up again.  So then, instead of winning multiple times, you kept losing in a row until the bets were void because you didn't have the necessary funds.  I thought that was the only fair thing to do.
If that's the case, then why does the user need to enter the date? Just a post should be enough.
The fact that a date was required made me think I could "plan ahead".
Because my only tools are page source and regex searches, and to avoid registering old bets that were quoted, a date was helpful for people to include.  I was thinking of how to store all future bets and pull them when the date matched.
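
A minimal sketch of that "plan ahead" idea (file name and layout are guesses, not the thread's actual spec): store future bets keyed by date, then pull only the ones matching today.
Code:
import datetime, json

def todays_bets(path="future_bets.json"):
    # bets are stored keyed by date: {"2024-06-15": [bet, bet, ...], ...}
    with open(path) as f:
        bets_by_date = json.load(f)
    today = datetime.datetime.now(datetime.timezone.utc).date().isoformat()
    return bets_by_date.get(today, [])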
Why are you asking for a username in the first place? Each post already contains a username..
Because I'm banging rocks here. Smiley  Like, I don't really know how to find the bet, and then walk backwards in the source to find the username that posted the bet.  Or maybe I do, now that I put it like that. Smiley  The quoted bet is still tough, there needs to be some sort of identifier of what day the bet is for.  Certainly doesn't have to be a full date though.
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
Cheesy   That was a lot of fun.  I knew satoshi was going to pass, and maybe it's funnier this way.  I saw your post, LoyceV, with the 1,000 bets and was so pleased the code was prepared for that.
Then how did I go from 46137 to 17 CBBits?

Quote
Code:
if date == today:
If that's the case, then why does the user need to enter the date? Just a post should be enough.
The fact that a date was required made me think I could "plan ahead".

Quote
I'm going to think about how to solve the 'satoshi' problem with my limited abilities.
Why are you asking for a username in the first place? Each post already contains a username.
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
 Cheesy   That was a lot of fun.  I knew satoshi was going to pass, and maybe it's funnier this way.  I saw your post, LoyceV, with the 1,000 bets and was so pleased the code was prepared for that. 
Code:
if date == today:
The only thing I'm disappointed about, besides typing in the wrong price [I couldn't get easyocr to work on the laptop], is how I gave up earlier on trying to learn GitHub.  I had the right code on the wrong computer.  I just ran the right code with the wrong prices, and you only got one winner and 999 "Not today!" messages.  Smiley  Moving on... I'm going to think about how to solve the 'satoshi' problem with my limited abilities.  That and GitHub.
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
This was the project I've come closest to quitting.  Everything was fairly simple until I tried to assign each bet the UTC hour it was made.  I needed that because the earlier in the day the prediction of up or down is made, the greater the payout.  I was first going to run the script all day, with one-hour sleep commands in between, but I still didn't know when to stop loading the next page to search for new bets.  So I broke the job into two separate scripts: hourly scraping for bets, and checking for winners and losers.  I still couldn't figure out how to tell if you were on the most recent page of a thread.

First, I tried to use NinjasticSpace to isolate the comments by hour, but the page source was in JavaScript, so my regex searches were useless.  But then I figured it out.  If the newest "page" in a thread is 40, for instance, then loading with "page = 60" works, but if one checks the page source, it still references "page = 40".
So if the page loaded but the page source doesn't match, you know you have reached the end of the line.  Smiley  Stop looking for new bets, save the results, quit the script, and start again at the next 55th minute of the hour, doing it all over again from the previous last page.  I decided to only use JSON files to store data, with no CSV this project.
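
A minimal sketch of that end-of-thread check, using the WO thread's topic number from the script below:
Code:
import requests, time

def past_last_page(offset):
    # Requesting an offset past the end still returns HTTP 200, but the
    # served page's own links point back at the true last offset.
    time.sleep(5)  # be polite to the forum
    html = requests.get(f"https://bitcointalk.org/index.php?topic=178336.{offset}").text
    return f"topic=178336.{offset}" not in html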

Here is the hourly bet finder script
Code:
import datetime, json, os, re, requests, time
from datetime import timezone, timedelta

utc_hour = datetime.datetime.now(datetime.UTC).hour

# set UTC dates for links and new folder, so 'today' is around 30 minutes old when this is run
today = datetime.datetime.now(timezone.utc)
yesterday = today - timedelta(1)
print(today)

laugh_out = ["User Name", "username", "UserName"]

json_filename = "C:/PyProjects/CBGame/latest_cbdata.json"
with open(json_filename, "r") as jsonfile:
    latest_data = json.load(jsonfile)
    for username, data in latest_data.items():
        username, cb_merits = data

json_file = "C:/PyProjects/CBGame/latest_page.json"
with open(json_file, "r") as jsonfile:
    latest_data = json.load(jsonfile)
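    # note: the stored value is a one-element list; the .get() below falls back to page offset 700 on a first run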

latest_page_number = latest_data.get("latest_page_number", [700])[0]
print(latest_page_number)


all_page_sources = []
while True:
    url = f"https://bitcointalk.org/index.php?topic=178336.{latest_page_number}"
    time.sleep(5)
    response = requests.get(url)
    filetext = response.text
    # pattern = fr"mirrortab_back'>\ ...

And here is the daily winner checker script
Code:
import datetime, json, math
from datetime import timezone, timedelta

today = datetime.datetime.now(timezone.utc)
yesterday = today - timedelta(1)
print(today)

updown = []
evenodd = []
exact = []        

print(updown)
print(evenodd)
print(exact)

# reader = easyocr.Reader(['en'], gpu = False)
# results = reader.readtext(f"C:/PyProjects/CB/images/download (01).png", detail = 0)
today_price = 61440 # int(results[0])
yesterday_price = 61440
print(today_price)
delta = today_price - yesterday_price
print(delta)

json_filename1 = "C:/PyProjects/CBGame/daily_updown.json"
with open(json_filename1, "r") as jsonfile1:
    latest_updown = json.load(jsonfile1)  
    updown.extend(latest_updown)
for ticket in updown:
    username, bet_type, bet, wager, utcdate, utch = ticket
    json_filename = "C:/PyProjects/CBGame/latest_cbdata.json"
    with open(json_filename, "r") as jsonfile:
        latest_data = json.load(jsonfile)
    _, cb_merits = latest_data.get(username, (None, None))
    wager_int = int(wager)
    if bet == "up" and delta >= 1 or bet == "down" and delta <= -1:
        if wager_int <= latest_data[username][1]:
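            # earlier bets pay more: a bet at hour 0 pays double the wager, hour 23 just over 1x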
            payout_ratio = (1 + ((24-utch)/24))
            latest_data[username][1] += math.floor(payout_ratio*wager_int)
            print(f"{username} won {math.floor(payout_ratio*wager_int)}, now has {latest_data[username][1]}")
        else:
            print(f"{username} bet too much!")
    else:
        if wager_int <= latest_data[username][1]:
            latest_data[username][1] -= wager_int
            print(f"{username} lost {wager}")
        else:
            print(f"{username} bet too much!")
    with open(json_filename, "w") as jsonfile:
        json.dump(latest_data, jsonfile)

json_filename2 = "C:/PyProjects/CBGame/daily_evenodd.json"
with open(json_filename2, "r") as jsonfile2:
    latest_evenodd = json.load(jsonfile2)  
    evenodd.extend(latest_evenodd)
for ticket in evenodd:
    json_filename = "C:/PyProjects/CBGame/latest_cbdata.json"
    with open(json_filename, "r") as jsonfile:
        latest_data = json.load(jsonfile)
        for username, data in latest_data.items():
            username, cb_merits = data
        username, bet_type, bet, wager, utcdate, utch = ticket
        wager_int = int(wager)
        if bet == "even" and today_price % 2 == 0 or bet == "odd" and today_price % 2 != 0:
            if wager_int <= latest_data[username][1]:
                latest_data[username][1] += wager_int
                print(f"{username} won {wager}, now has {latest_data[username][1]}")
            else:
                print(f"{username} bet too much!")
        else:
            if wager_int <= latest_data[username][1]:
                latest_data[username][1] -= wager_int
                print(f"{username} lost {wager}")
            else:
                print(f"{username} bet too much!")
    with open(json_filename, "w") as jsonfile:
        json.dump(latest_data, jsonfile)

json_filename3 = "C:/PyProjects/CBGame/daily_exact.json"
with open(json_filename3, "r") as jsonfile3:
    latest_exact = json.load(jsonfile3)  
    exact.extend(latest_exact)

for ticket in exact:
    json_filename = "C:/PyProjects/CBGame/latest_cbdata.json"
    with open(json_filename, "r") as jsonfile:
        latest_data = json.load(jsonfile)
        for username, data in latest_data.items():
            username, cb_merits = data
        username, bet_type, bet, wager, utcdate, utch = ticket
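        # exact guesses pay up to 100x the wager, scaled down the later in the day the bet was placed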
        payout_ratio = ((24-utch)/24)
        bet_int = int(bet)
        wager_int = int(wager)
        print(bet)
        
        if bet_int == today_price:
            if wager_int <= latest_data[username][1]:
                latest_data[username][1] += math.floor(100*payout_ratio*wager_int)
                print(f"{username} won {math.floor(100*payout_ratio*wager_int)}, now has {latest_data[username][1]}")
            else:
                print(f"{username} bet too much!")
        else:
            if wager_int <= latest_data[username][1]:
                latest_data[username][1] -= wager_int
                print(f"{username} lost {wager}")
            else:
                print(f"{username} bet too much!")
    with open(json_filename, "w") as jsonfile:
        json.dump(latest_data, jsonfile)

###########                            ☰☰☰        ▟█
# The End #                       ☰☰☰☰☰☰  ⚞▇▇▇██▛(°)>
###########                            ☰☰☰        ▜█
I'm going to manually put in the prices for a bit.

I almost forgot, 14/14 this week for ChartBuddy Daily Recap and Top20 Days Of Bitcoin posting.  Let's go!  I even changed the code mid-week to highlight in different colors, and added the betting results to the pyperclip output.
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
We've got another suggestion request for a WO thread gambling game.  Will the hourly price go up or down?  I expanded it to daily, and don't quite know how I'm going to assign what hour the bets were made, yet.  Here is the code so far.  Rather than ask someone or find the API, I figured we would use OCR to find the latest ChartBuddy bid.  Smiley
Code:
import csv, datetime, easyocr, json, math, os, pyautogui, pyperclip, random, re, requests, time, urllib.request, webbrowser
from datetime import timezone, timedelta, date
from os import rename
import pandas as pd
from tabulate import tabulate

utc_hour = datetime.datetime.now(datetime.UTC).hour
today = datetime.datetime.now(timezone.utc)
yesterday = today - timedelta(1)
print(today)

laugh_out = ["User Name", "username", "UserName"]

json_filename = "C:/PyProjects/CBGame/latest_cbdata.json"
with open(json_filename, "r") as jsonfile:
    latest_data = json.load(jsonfile)
    for username, data in latest_data.items():
        username, cb_merits = data

all_page_sources = []
i = 1480
url = f"https://bitcointalk.org/index.php?topic=5484350.{i}"
url2 = "https://pushups.free.nf/index.php?topic=37.0"
url3 = "https://bitcointalk.org/index.php?topic=5484132.40"
time.sleep(5)
response = requests.get(url3)
if response.status_code == 200:
    print(i)
    all_page_sources.append(response.text)

with open("C:/PyProjects/CBGameSource.txt", "w", encoding="UTF-8") as textfile:
    for source_code in all_page_sources:
        textfile.write(source_code)
with open("C:/PyProjects/CBGameSource.txt", "r", encoding="UTF-8") as textfile:
    filetext = textfile.read()

# format = CBGame,User Name,BetType,Bet,Wager BetTypes = updown, evenodd, exact
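# "(?:ame)" anchors on the tail of the CBGame keyword; the first group allows usernames with inner spaces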
pattern = r"(?:ame),(\w[\s\w]+\w),(\w+),(\w+),(\d+)"
matches = re.findall(pattern, filetext)
print(matches)

updown = []
evenodd = []
exact = []

json_filename = "C:/PyProjects/CBGame/latest_cbdata.json"
with open(json_filename, "r") as jsonfile:
    latest_data = json.load(jsonfile)
    for match in matches:
        username, bet_type, bet, wager = match
        if username not in laugh_out:
            if username not in latest_data:
                latest_data[username] = (username, 1000)
with open(json_filename, "w") as jsonfile:
    json.dump(latest_data, jsonfile)
              
# Use list append directly on the appropriate list
for match in matches:
    username, bet_type, bet, wager = match
    if bet_type == "updown":
        updown.append(match + (utc_hour,))  # Add the entire match + utc_hour to the updown list
    if bet_type == "evenodd":
        evenodd.append(match + (utc_hour,))  # Add the entire match + utc_hour to the evenodd list
    if bet_type == "exact":
        try:
            bet_int = int(bet)
            exact.append(match + (utc_hour,))  # wrap utc_hour in a tuple before concatenating
        except ValueError:
            continue
        

print(updown)
print(evenodd)
print(exact)

reader = easyocr.Reader(['en'], gpu = False)
results = reader.readtext(f"C:/PyProjects/CB/images/download (01).png", detail = 0)
today_price = int(results[0])
yesterday_price = 60793
print(today_price)
delta = today_price - yesterday_price
print(delta)
for ticket in updown:
    json_filename = "C:/PyProjects/CBGame/latest_cbdata.json"
    with open(json_filename, "r") as jsonfile:
        latest_data = json.load(jsonfile)
        for username, data in latest_data.items():
            username, cb_merits = data
        username, bet_type, bet, wager, utc_bet = ticket
        wager_int = int(wager)
        if bet == "higher" and delta >= 1 or bet == "lower" and delta <= -1:
            if wager_int <= latest_data[username][1]:
                payout_ratio = (1 + ((24-utc_bet)/24))
                latest_data[username][1] += math.floor(payout_ratio*wager_int)
                print(f"{username} won {math.floor(payout_ratio*wager_int)}, now has {latest_data[username][1]}")
            else:
                print(f"{username} bet too much!")
        else:
            if wager_int <= latest_data[username][1]:
                latest_data[username][1] -= wager_int
                print(f"{username} lost {wager}")
            else:
                print(f"{username} bet too much!")
    with open(json_filename, "w") as jsonfile:
        json.dump(latest_data, jsonfile)

for ticket in evenodd:
    json_filename = "C:/PyProjects/CBGame/latest_cbdata.json"
    with open(json_filename, "r") as jsonfile:
        latest_data = json.load(jsonfile)
        for username, data in latest_data.items():
            username, cb_merits = data
        username, bet_type, bet, wager, utc_hour = ticket
        wager_int = int(wager)
        if bet == "even" and today_price % 2 == 0 or bet == "odd" and today_price % 2 != 0:
            if wager_int <= latest_data[username][1]:
                latest_data[username][1] += wager_int
                print(f"{username} won {wager}, now has {latest_data[username][1]}")
            else:
                print(f"{username} bet too much!")
        else:
            if wager_int <= latest_data[username][1]:
                latest_data[username][1] -= wager_int
                print(f"{username} lost {wager}")
            else:
                print(f"{username} bet too much!")
    with open(json_filename, "w") as jsonfile:
        json.dump(latest_data, jsonfile)

payout_ratio = ((24-utc_hour)/24)
for ticket in exact:
    json_filename = "C:/PyProjects/CBGame/latest_cbdata.json"
    with open(json_filename, "r") as jsonfile:
        latest_data = json.load(jsonfile)
        for username, data in latest_data.items():
            username, cb_merits = data
        username, bet_type, bet, wager, utc_bet = ticket  # exact tickets carry the bet hour too
        bet_int = int(bet)
        wager_int = int(wager)
        print(bet)
        
        if bet_int == today_price:
            if wager_int <= latest_data[username][1]:
                latest_data[username][1] += math.floor(100*payout_ratio*wager_int)
                print(f"{username} won {math.floor(100*payout_ratio*wager_int)}, now has {latest_data[username][1]}")
            else:
                print(f"{username} bet too much!")
        else:
            if wager_int <= latest_data[username][1]:
                latest_data[username][1] -= wager_int
                print(f"{username} lost {wager}")
            else:
                print(f"{username} bet too much!")
    with open(json_filename, "w") as jsonfile:
        json.dump(latest_data, jsonfile)
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Look Ahead:  Functions!

Updates:  Chart Buddy is no longer posting.  Hopefully just for now.

The Top 20 scripts have been running great.  Just one crash in the last two weeks, I believe because I ran the script too close to UTC midnight and the Canadian $ data came back blank.  So I pushed the run time back to 12:45 am UTC instead of 12:20 am.  And so far so good.

The 100 Push-Ups A Day Until Bitcoin Is $100K Challenge in Speculation has been taking up most of my free time.

The regex format is working great, when people type it correctly. Wink  Even I mess it up from time to time.  There need to be some sanity checks, like: is the date from the future, or is this fewer pushups than the last entry?  (A sketch follows below.)  But we've added a second table and a little graph tracking everyone's pushup progress, partly inspired by LoyceV's sick BBC code merit graphs I saw the other week, the progress bar I figured out, and the animated Mango sig sky.
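
A sketch of those sanity checks (field names are assumptions, not the script's actual variables):
Code:
import datetime

def is_sane(report_date, total_pushups, prev_total):
    if report_date > datetime.date.today():
        return False  # a report dated in the future
    if prev_total is not None and total_pushups < prev_total:
        return False  # fewer total pushups than the last entry
    return True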

But now I wanted to do more with the data.  Graphs for each user.  One Total Pushups vs Time and another, Pushups per Given Report.  What to do with them though?

Main goal:  To provide individual statistics to people taking part in the 100 Push-Ups A Day Until Bitcoin Is $100K Challenge

Delivery Options:  I noticed the Tabulate Python module I'm using has an output format option of html, so in theory I could make websites for everyone.  I tested one with free hosting, but I was going to have to get everyone free SSL certificates, so I went with option B.  Our own website of websites, er basically.  I've set up an SMF forum.  I know, I know, put that in the things that could only go wrong category.  Haha!  But when all you have is a hammer...  We'll see how long the open parts of the forum last, but there is one board that only I can post the graphs in, and guests and members can view.  Then there is the On topic and Off topic boards that members can post in.  I'm ready to delete those boards if need be.

Method:  I had some Copilot help, and I was able to modify old code for new purposes, but I've got my first two functions on the books: grapher, which plots the total pushups, and graphper (looks better than grapherper Smiley ), which plots the number of pushups per report.  I don't think I quite have the idea though.  My function is still basically a one-purpose-only function, or rather a function that does many things to achieve one specific task.
A] There is a dictionary of usernames and their forum thread numbers at the top.
B] The function, which takes each user's CSV file that I've been appending for the last two weeks, and calculates and graphs the total pushups and the pushups per report, using matplotlib scatter and stem graphs respectively.
C] And then a for loop that runs the function for every user in the dictionary.

Actuality:  Now there is only one function.  I strung them back to back so only one forum post needed to be made.  I'm trying not to attract bots, so I'm not going to link the forum, but I will spell it out in the Speculation thread.  I removed it, and my super secret API keys, from the code below.
both_graphs code:
Code:
import csv, json, pyautogui, pyperclip, requests, time, webbrowser
import matplotlib.pyplot as plt
import numpy as np
import matplotlib as mpl
import pandas as pd
from datetime import timedelta

SMALL_SIZE = 8
MEDIUM_SIZE = 12
BIGGER_SIZE = 14

plt.rc('font', size=MEDIUM_SIZE)          # controls default text sizes
plt.rc('axes', titlesize=MEDIUM_SIZE)     # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE)    # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE)    # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE)    # fontsize of the tick labels
plt.rc('legend', fontsize=MEDIUM_SIZE)    # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE)  # title fontsize

pusher_dictionary = {"Team": 2, "OgNasty": 3, "JayJuanGee": 4, "DirtyKeyboard": 5, "I_Anime": 6, "Tmoonz": 7, "Troytech": 8, "7juju": 9, "Ambatman":  10,  "promise444c5": 11, "Gallar": 12,  "Mayor of Ogba": 13, "Zackz5000": 14, "Ricardo11": 15, "Tungbulu": 16, "Bravut": 17,  "Kwarkam": 18, "Judith87403": 19, "adultcrypto": 20, "teamsherry": 21, "Notalony": 22, "Dailyscript": 23,  "Obim34": 24, "proty": 25, "Antonil": 27, "Bd officer": 28, "SickDayIn": 29}

plt.style.use('ggplot')
def both_graphs(username):
    json_filename = "C:/PyProjects/PU/PU/latest_data.json"
    with open(json_filename, "r") as jsonfile:
        latest_data = json.load(jsonfile)
        for pusher, data in latest_data.items():
            if pusher == username:
                username, days_reporting, pushups_completed, date = data
                list_of_pushups = []
                x = []  # list to hold dates
                y = []  # list to hold pushups_completed
                filename = f"C:/PyProjects/PU/PU/{pusher}.csv"
                df = pd.read_csv(filename)
                entries = len(df)
                if entries >= 1:
                    with open(filename, "r") as personal_csv:
                        reader = csv.reader(personal_csv)
                        
                        prev_pushups = 0  # initialize previous pushups count
                        for row in reader:
                            current_pushups = int(row[2])  # current total pushups
                            if prev_pushups != 0:  # if not the first row
                                x.append(row[3])  # append date
                                daily_pushups = current_pushups - prev_pushups  # calculate daily pushups
                                y.append(daily_pushups)  # append daily pushups
                                # list_of_pushups.append(daily_pushups[-1])
                            prev_pushups = current_pushups  # update previous pushups count
                        print(daily_pushups)  
                        x = [pd.to_datetime(d) for d in x]  # convert dates after x has been populated
                        
                        #set plot options
                        yesterday = min(x) - timedelta(1)
                        tomorrow = max(x) + timedelta(1)
                        mpl.pyplot.close()
                        fig, ax = plt.subplots(figsize=(8,8),  layout="constrained")
                        markerline, stemline, baseline, = ax.stem(x,y,linefmt='k-',markerfmt='ko',basefmt='k.')
                        plt.setp(stemline, linewidth = 2.5)
                        plt.setp(markerline, markersize = 8)
                        ax.set(xlim=(yesterday, tomorrow), ylim=(0, max(y)), yticks=np.arange(0, max(y)*1.4, 50))
                        ax.set_xlabel('Report Date')
                        ax.set_ylabel('Pushups in Report')
                        plt.savefig(f'C:/PyProjects/PU/{pusher}.png')
                        list_of_pushups = y

                        #upload and get link
                        url = "https://api.imgur.com/3/image"
                        payload = {'name': f'{pusher}'}
                        files=[('image',(f'{pusher}.png',open(f'C:/PyProjects/PU/{pusher}.png','rb'),'image/png'))]
                        headers = {'Authorization': 'Bearer xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'}
                        response = requests.post(url, headers=headers, data=payload, files=files)
                        data = response.json()
                        pusher_graph = data.get("data", {}).get("link")
                        print(pusher_graph)
                else:
                     pusher_graph = "2 data points needed"

    with open(json_filename, "r") as jsonfile:
        latest_data = json.load(jsonfile)
        for pusher, data in latest_data.items():
            if pusher == username:
                username, days_reporting, pushups_completed, date = data
                x = []
                y = []
                with open(f"C:/PyProjects/PU/PU/{pusher}.csv", "r") as personal_csv:
                    reader = csv.reader(personal_csv)
                    for row in reader:
                        x.append(row[3])  
                        y.append(int(row[2]))  
                    x = [pd.to_datetime(d) for d in x]
                    yesterday = min(x) - timedelta(1)
                    tomorrow = max(x) + timedelta(1)
                    mpl.pyplot.close()
                    fig, ax = plt.subplots(figsize=(8,8),  layout="constrained")
                    ax.scatter(x, y, s=100)

                    #this part deals with how the axes should be scaled relative to how many pushups have been completed
                    if max(y) >= 2000:
                            y_space = 1000
                    else:
                            y_space = 100
                    ax.set(xlim=(yesterday, tomorrow), ylim=(0, max(y)*1.4), yticks=np.arange(0, max(y)*1.4, y_space))
                    ax.set_title(f"{pusher}'s Total Pushups Reported")
                    ax.set_xlabel('Report Date')
                    ax.set_ylabel('Total Pushups')
                    plt.savefig(f'C:/PyProjects/PU/{pusher}2.png')

                    # grabs the forum thread number from the dictionary
                    page = pusher_dictionary[pusher]
                    print(page)

                    # upload graph to imgur
                    url = "https://api.imgur.com/3/image"
                    payload = {'name': f'{pusher}'}
                    files=[('image',(f'{pusher}.png',open(f'C:/PyProjects/PU/{pusher}2.png','rb'),'image/png'))]
                    headers = {'Authorization': 'Bearer xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'}
                    response = requests.post(url, headers=headers, data=payload, files=files)
                    data = response.json()
                    pusher_graph2 = data.get("data", {}).get("link")
                    print(pusher_graph2)

                    # add post to clipboard
                    pyperclip.copy(f"[img]{pusher_graph}[/img]\n[img]{pusher_graph2}[/img]")

                    # can use this link for the reply button
                    url7 = f'https://pushups/index.php?action=post;topic={page}.0'
                    webbrowser.open(url7)
                    time.sleep(20)
                    pyautogui.hotkey('f11')
                    time.sleep(10)
                    pyautogui.hotkey('tab')
                    time.sleep(5)
                    pyautogui.hotkey('tab')
                    time.sleep(5)
                    pyautogui.hotkey('tab')
                    time.sleep(5)
                    pyautogui.hotkey('ctrl', 'v')
                    time.sleep(5)
                    pyautogui.hotkey('tab')
                    time.sleep(5)
                    
                    # we're doing it live if the next command is #ed out
                    # pyautogui.hotkey('tab')
                    time.sleep(5)
                    pyautogui.hotkey('enter')
                    time.sleep(5)
                    pyautogui.hotkey('f11')
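
# run both_graphs for every user, walking the dictionary in reverse insertion order (needs Python 3.8+)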

for username, page in reversed(pusher_dictionary.items()):
    print(username)
    both_graphs(f"{username}")

sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Greetings and salutations, what a beautiful day for a code run, let's run two.  Grin  I love it when a plan comes together. - Hannibal, The A-Team.
I'd say "I'm not touching a thing", but... the Top20 script shows the top year, month, and week of BTC VWAP only on the first day after the respective periods.  But with the 4th table the post hits the character limit.  Might have to lose the niceties and sidebars; do spaces count?

I mentioned heavily leaning on Copilot for the push-up code and figured someone might want to read the conversation.
Code:
## You
We've picked up another project.  My friend convinced people to exercise and report their pushups to a public forum,
then I will use Python to gather and put the data into a table.  I was looking for help to develop the regex expression search,
starting with the key 100kpushups, from the source page of the forum.  I figured we could turn that into a csv pretty easily.  
Here is the template followed by the current code, template = "format:
100kchallenge,UserName,daysreporting,pushupscompleted,Date-Reporting-In(yyyy-mm-dd)
example:
100kchallenge,DirtyKeyboard,57,5081,2024-04-06"   starting code = '''import os, pyautogui, pyperclip, re, requests, time, urllib.request, webbrowser
from datetime import datetime, timezone, timedelta
url4 = 'https://bitcointalk.org/index.php?topic=5484350.720'
time.sleep(20)
response = requests.get(url4)
# turn page source of forum into pagesource
source_code = response.text
textfile = open('C:/PyProjects/PU/Temp/PUSource.txt', 'w')
textfile.write(source_code)
textfile.seek(0)
filetext = textfile.read()
textfile.close()
# find matches using regex for 1 image posted today at around 00:02 UTC...
pattern1 = r"100kpushups, \w{{.}}
# then only numbers, only numbers, and date only in yyyy-mm-dd format
print(pattern1)'''
## Copilot
Certainly! Let's create a regular expression (regex) pattern to extract the relevant information from the forum source page related to the 100kpushups challenge. Based on your template, we'll look for lines that match the following format:

100kchallenge, UserName, daysreporting, pushupscompleted, Date-Reporting-In(yyyy-mm-dd)

Here's the regex pattern you can use:

import re

# Define the regex pattern for 100kpushups
pattern = r"100kpushups, (\w+), (\d+), (\d+), (\d{4}-\d{2}-\d{2})"

# Example source text (you can replace this with the actual source code from the forum)
source_text = """
100kpushups, DirtyKeyboard, 57, 5081, 2024-04-06
100kpushups, FitGuru, 42, 3750, 2024-04-06
"""

# Find all matches using the regex pattern
matches = re.findall(pattern, source_text)

# Print the matches
for match in matches:
    print(f"Username: {match[0]}, Days Reporting: {match[1]}, Pushups Completed: {match[2]}, Date: {match[3]}")

When you run this code, it will extract the relevant data from the source_text and print it out. You can replace the source_text with the actual content from the forum page. Adjust the file paths and other details as needed for your specific use case.
Feel free to modify the pattern or adapt it further based on any variations you encounter in the forum data. Let me know if you need any additional assistance!
## You
That's great, thank you!  Instead of printing to terminal could we start a dictionary entry for each unique UserName?  Or another way might be to write the matches to a 4 column csv first?
## Copilot
Certainly! Let's organize the data either into a dictionary or write it to a CSV file.
- Dictionary Approach:
We can create a dictionary where each unique UserName is a key, and the corresponding value is a list of tuples containing (days_reporting, pushups_completed, date) for that user. Here's how you can modify the code to achieve this:

import re
import csv

# Define the regex pattern for 100kpushups
pattern = r"100kpushups, (\w+), (\d+), (\d+), (\d{4}-\d{2}-\d{2})"

# Example source text (you can replace this with the actual source code from the forum)
source_text = """
100kpushups, DirtyKeyboard, 57, 5081, 2024-04-06
100kpushups, FitGuru, 42, 3750, 2024-04-06
"""

# Initialize a dictionary to store data for each unique UserName
user_data = {}

# Find all matches using the regex pattern
matches = re.findall(pattern, source_text)

# Process each match
for match in matches:
    username, days_reporting, pushups_completed, date = match
    if username not in user_data:
        user_data[username] = []
    user_data[username].append((days_reporting, pushups_completed, date))

# Print the dictionary entries
for username, data in user_data.items():
    print(f"Username: {username}")
    for entry in data:
        print(f"  Days Reporting: {entry[0]}, Pushups Completed: {entry[1]}, Date: {entry[2]}")

# Write the data to a CSV file
csv_filename = "100kpushups_data.csv"
with open(csv_filename, "w", newline="") as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(["Username", "Days Reporting", "Pushups Completed", "Date"])
    for username, data in user_data.items():
        for entry in data:
            writer.writerow([username, entry[0], entry[1], entry[2]])

print(f"Data written to {csv_filename}")

- CSV Approach:
If you prefer a CSV file directly, we can write the matches to a 4-column CSV file. The code above already includes writing the data to a CSV file named 100kpushups_data.csv.
Feel free to choose the approach that best suits your needs!
## You
That works great.  So the dictionary is not permanent in any way?  But I have the csv file to be appended or updated.  I see a problem though: someone quoted my format in the thread, so can we deal with identical entries not being written to the csv?
## Copilot
Certainly! Let's address both concerns:
- Dictionary Approach (Non-Permanent):
The dictionary approach I provided is not permanent. It exists only during the execution of your Python script. If you close the script or restart your program, the dictionary will be lost. However, you can always recreate it by processing the data again from the source text.
- Avoiding Identical Entries in CSV:
To prevent identical entries from being written to the CSV file, we can keep track of the unique usernames encountered. If a username has already been added to the CSV, we'll skip adding it again. Here's the modified code:

import re
import csv

# Define the regex pattern for 100kpushups
pattern = r"100kpushups, (\w+), (\d+), (\d+), (\d{4}-\d{2}-\d{2})"

# Example source text (you can replace this with the actual source code from the forum)
source_text = """
100kpushups, DirtyKeyboard, 57, 5081, 2024-04-06
100kpushups, FitGuru, 42, 3750, 2024-04-06
100kpushups, DirtyKeyboard, 58, 5120, 2024-04-07
"""

# Initialize a set to store unique usernames
unique_usernames = set()

# Initialize a list to store data for writing to CSV
csv_data = []

# Find all matches using the regex pattern
matches = re.findall(pattern, source_text)

# Process each match
for match in matches:
    username, days_reporting, pushups_completed, date = match
    if username not in unique_usernames:
        unique_usernames.add(username)
        csv_data.append([username, days_reporting, pushups_completed, date])

# Write the data to a CSV file
csv_filename = "100kpushups_data.csv"
with open(csv_filename, "a", newline="") as csvfile:
    writer = csv.writer(csvfile)
    for entry in csv_data:
        writer.writerow(entry)

print(f"Data written to {csv_filename}")

In this modified code:
    - We use a set called unique_usernames to keep track of usernames encountered.
    - If a username is already in the set, we skip adding it to the CSV.
    - Otherwise, we add the username to the set and append the data to the csv_data list.
Now, the CSV file will only contain unique entries based on the usernames. Feel free to adjust the code further to fit your specific requirements!
## You
Okay, so far so good.  Now I'm trying to repurpose some of our code to rank the users by greatest pushups done.  Is this close?  '''print(f"Data written to {csv_filename}")
# Read data from the CSV file for ranking pushups done
rows = []
with open(csv_filename, 'r') as file:
reader = csv.reader(file)
for row in reader:
tuples = [(username, days_reporting, pushups_completed, date, i + 1) for i, (username, days_reporting, pushups_completed, date) in enumerate(rows)]
sorted_tuples = sorted(tuples, key=lambda x: float(x[1]), reverse=True)'''
## Copilot
Your approach is on the right track! However, there are a couple of adjustments needed to achieve the desired ranking of users by the greatest number of pushups completed. Let's make the necessary modifications:
- Reading Data from CSV and Ranking Users:
We'll read the data from the CSV file and rank users based on the number of pushups completed. Here's the updated code snippet:

import csv

# Read data from the CSV file
csv_filename = "100kpushups_data.csv"
rows = []
with open(csv_filename, 'r') as file:
    reader = csv.reader(file)
    for row in reader:
        rows.append(row)

# Sort the rows by pushups completed (descending order)
sorted_rows = sorted(rows[1:], key=lambda x: int(x[2]), reverse=True)

# Print the sorted rows
print("\nRanking by Pushups Completed:")
for i, row in enumerate(sorted_rows):
    print(f"{i+1}. {row[0]}: {row[2]} pushups")

In this code:
    - We read the data from the CSV file into the rows list.
    - We sort the rows based on the number of pushups completed (column index 2) in descending order.
    - Finally, we print the sorted rows along with the user rankings.
- Handling Duplicate Usernames:
If there are duplicate usernames in the CSV (as you mentioned someone quoted your format), the code above will handle them correctly. It will rank users based on the total pushups completed, even if there are multiple entries for the same username.
Feel free to incorporate this code snippet into your existing script, and it should give you the desired ranking of users by pushups completed!
## You
Look what I added :)  '''for i, row in enumerate(sorted_rows):
print(f"{i+1}. {row[0]}: {row[2]} pushups done in {row[1]} day(s).")'''  And don't think I didn't notice that FitGuru has a slightly higher pushups per day average than I do.  Haha. :D  Was that on purpose?
## Copilot
Certainly! I see you've added a nice touch to display the rankings with the number of pushups done and the corresponding days reported.
As for FitGuru's slightly higher pushups per day average, well, it seems even in the world of pushups, there's friendly competition! FitGuru might have been secretly doing extra reps when no one was looking.  Keep up the great work, and remember, consistency is key! ️‍♂️
If you need any more assistance or have further questions, feel free to ask!
## You
Oh wow, can we use python tabulate with the pushups csv file or do we need a panda.dataframe to auto make a table?
## Copilot
Certainly! You have a couple of options for creating a table from your CSV data:
- Using tabulate:
The tabulate library is a great choice for creating formatted tables from tabular data. It works well with lists of lists (which you can extract from your CSV data). Here's how you can use it:

from tabulate import tabulate
import csv

# Read data from the CSV file
csv_filename = "100kpushups_data.csv"
rows = []
with open(csv_filename, 'r') as file:
    reader = csv.reader(file)
    for row in reader:
        rows.append(row)

# Print the data as a table
headers = ["Username", "Days Reporting", "Pushups Completed", "Date"]
print(tabulate(rows[1:], headers=headers, tablefmt="grid"))

Make sure you have the tabulate library installed (pip install tabulate) before running this code.
- Using pandas DataFrame:
If you prefer working with DataFrames, you can use the pandas library. It provides more advanced data manipulation capabilities. Here's an example using pandas:

import pandas as pd

# Read data from the CSV file into a DataFrame
csv_filename = "100kpushups_data.csv"
df = pd.read_csv(csv_filename)

# Print the DataFrame (optional)
print(df)

# If you want to display it as a table:
print(df.to_string(index=False))

Make sure you have the pandas library installed (pip install pandas) before running this code.
Choose the approach that best fits your needs. Both methods allow you to create nicely formatted tables from your CSV data!
## You
Amazing again.  Thank you!  But it only seems to work on the second run, which I think means during the sort it is skipping the header row, but the csv file has no headers.  Do you think we could add matching headers during this step?  '''# Process each match
for match in matches:
username, days_reporting, pushups_completed, date = match
if username not in unique_usernames:
unique_usernames.add(username)
csv_data.append([username, days_reporting, pushups_completed, date])
Write the data to a CSV filecsv_filename = "C:/PyProjects/PU/100kpushups_data.csv"
with open(csv_filename, "a", newline="") as csvfile:
writer = csv.writer(csvfile)
for entry in csv_data:
writer.writerow(entry)'''
## Copilot
Certainly! To address the issue of skipping the header row during sorting, we can add the headers to the CSV file before writing the data. Here's the modified code snippet:

import csv

# Define the headers for the CSV file
csv_headers = ["Username", "Days Reporting", "Pushups Completed", "Date"]

# Read data from the CSV file
csv_filename = "C:/PyProjects/PU/100kpushups_data.csv"
rows = []
with open(csv_filename, 'r') as file:
    reader = csv.reader(file)
    for row in reader:
        rows.append(row)

# Sort the rows by pushups completed (descending order)
sorted_rows = sorted(rows[1:], key=lambda x: int(x[2]), reverse=True)

# Write the data to a new CSV file with headers
new_csv_filename = "C:/PyProjects/PU/100kpushups_data_with_headers.csv"
with open(new_csv_filename, "w", newline="") as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(csv_headers)  # Write the headers
    for entry in sorted_rows:
        writer.writerow(entry)

print(f"Data written to {new_csv_filename}")

In this modified code:
- We define the csv_headers list containing the column names.
- Before writing the data to the new CSV file, we write the headers using writer.writerow(csv_headers).
- The rest of the code remains the same.
Now the new CSV file will have headers, and the sorting should work as expected. Feel free to give it a try, and let me know if you need any further assistance!
## You
Super, but this is confusing me.  Does the Terminal in Visual Studio have some persistent memory?  I ran your code and it turned out great.  But then I decided to shorten the headers to just Pushups, and deleted the csv file.  Reran the script, and this came out on the terminal "| Username      |   Days Reporting |   Pushups Completed | Date       |
+===============+==================+=====================+============+
| DirtyKeyboard |               57 |                5081 | 2024-04-06 |
+---------------+------------------+---------------------+------------+"  which is cool, but the headers didn't change.  But in the new csv file, the headers are changed.  What should I be doing between script runs to clear all memory before runs?
## Copilot
The behavior you're experiencing is likely due to the way Python handles file I/O and memory. Let me explain what's happening:
- File I/O and CSV Headers:
    - When you read data from a CSV file using csv.reader, it reads the entire file (including the header row) into memory as a list of lists (each inner list representing a row).
    - When you sort the rows, you're sorting the entire list (including the header row) based on the specified column.
    - When you write the sorted data back to a new CSV file, it includes the header row from the original data.
- Memory Persistence:
    - Python scripts run in a single session, and variables and data persist in memory until the script terminates.
    - When you re-run the script, any data loaded into memory during the previous run (including the header row) remains accessible.
    - Deleting the CSV file doesn't clear the data from memory; it only removes the file from disk.
- Solution:
    - To ensure that the header row is updated when you modify the headers, you can explicitly write the headers to the new CSV file before writing the data.
    - Additionally, consider using a separate variable for the data (excluding the header row) to avoid mixing it with the header during sorting.
Here's an updated version of the code that addresses these points:

import csv

# Define the headers for the CSV file
csv_headers = ["Username", "Days Reporting", "Pushups Completed", "Date"]

# Read data from the CSV file
csv_filename = "C:/PyProjects/PU/100kpushups_data.csv"
rows = []
with open(csv_filename, 'r') as file:
    reader = csv.reader(file)
    for row in reader:
        rows.append(row)

# Sort the rows by pushups completed (excluding the header row)
sorted_rows = sorted(rows[1:], key=lambda x: int(x[2]), reverse=True)

# Write the data to a new CSV file with headers
new_csv_filename = "C:/PyProjects/PU/100kpushups_data_with_headers.csv"
with open(new_csv_filename, "w", newline="") as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(csv_headers)  # Write the headers
    for entry in sorted_rows:
        writer.writerow(entry)

print(f"Data written to {new_csv_filename}")

This updated code ensures that the header row is correctly written to the new CSV file, even if you modify the headers. Feel free to give it a try, and let me know if you have any further questions!
## You
I'm so silly sometimes.  I thought tabulate pulls the table headings from the csv, but there is a separate input line for the table headings.  I understand now.  Thanks again!
## Copilot
No need to feel silly! Learning and understanding new concepts is all part of the process. I'm glad we got it sorted out! If you have any more questions or need further assistance, feel free to ask.
## You
I see now we lost our ranking.  :)  Can we add the rank to the csv file as we sort that data here? '''# Sort the rows by pushups completed (descending order) adding rank
sorted_rows = sorted(rows[0:], key=lambda x: int(x[2]), reverse=True)
# I'm saving this for later to congratulate new results from people
print("\nRanking by Pushups Completed:")
for i, row in enumerate(sorted_rows):
print(f"{i+1}. {row[0]}: {row[2]} pushups done in {row[1]} day(s).")
# Read data from the CSV file'''
## Copilot
Certainly! To add the rank to the CSV file while sorting the data, we can modify the code as follows:

import csv

# Define the headers for the CSV file
csv_headers = ["Username", "Days Reporting", "Pushups Completed", "Date", "Rank"]

# Read data from the CSV file
csv_filename = "C:/PyProjects/PU/100kpushups_data.csv"
rows = []
with open(csv_filename, 'r') as file:
    reader = csv.reader(file)
    for row in reader:
        rows.append(row)

# Sort the rows by pushups completed (excluding the header row)
sorted_rows = sorted(rows[1:], key=lambda x: int(x[2]), reverse=True)

# Add rank to each row
for i, row in enumerate(sorted_rows):
    row.append(i + 1)  # Rank is 1-based

# Write the data to a new CSV file with headers
new_csv_filename = "C:/PyProjects/PU/100kpushups_data_with_rank.csv"
with open(new_csv_filename, "w", newline="") as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(csv_headers)  # Write the headers
    for entry in sorted_rows:
        writer.writerow(entry)

print(f"Data with rank written to {new_csv_filename}")

In this modified code:
- We add a new column called "Rank" to the csv_headers.
- During sorting, we append the rank (1-based) to each row.
- The data is then written to a new CSV file with the updated headers.
Now the new CSV file will include the rank along with the other data. Feel free to give it a try, and let me know if you have any further questions!
## You
Is there a way to change append here with lead?  '''for i, row in enumerate(sorted_rows):
row.append(i + 1)'''    Or otherwise we need to move the csv file rank column from [4] to column [0] for the table to come out correct.  Or it would be easiest if we can tell tabulate to put the columns in this order [4] [0] [1] [2] [3] from the new_csv_filename = "C:/PyProjects/PU/100kpushups_data_with_rank.csv".  Either way would work.
## Copilot
Certainly! To add the rank as the first column (leading) in your CSV file, you can use the insert method to place it at the beginning of each row. Here's how you can modify the code:

import csv

# Define the headers for the CSV file
csv_headers = ["Rank", "Username", "Days Reporting", "Pushups Completed", "Date"]

# Read data from the CSV file
csv_filename = "C:/PyProjects/PU/100kpushups_data.csv"
rows = []
with open(csv_filename, 'r') as file:
    reader = csv.reader(file)
    for row in reader:
        rows.append(row)

# Sort the rows by pushups completed (excluding the header row)
sorted_rows = sorted(rows[1:], key=lambda x: int(x[2]), reverse=True)

# Add rank to each row (insert at the beginning)
for i, row in enumerate(sorted_rows):
    row.insert(0, i + 1)  # Insert rank at the beginning

# Write the data to a new CSV file with headers
new_csv_filename = "C:/PyProjects/PU/100kpushups_data_with_rank.csv"
with open(new_csv_filename, "w", newline="") as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(csv_headers)  # Write the headers
    for entry in sorted_rows:
        writer.writerow(entry)

print(f"Data with rank written to {new_csv_filename}")

Now the new CSV file will have the rank as the first column. Feel free to give it a try, and let me know if you have any further questions!
## You
The ranking is working well, but there seems to be a problem.  With the matches writing to the csv.  I just posted my daily progress,
and the regex found it, but didn't put the most recent result in the brand new csv file.  I put a print(matches) command in the middle
and this came back "[('DirtyKeyboard', '57', '5081', '2024-04-06'), ('DirtyKeyboard', '57', '5081', '2024-04-06'), ('DirtyKeyboard', '58', '5106', '2024-04-07')]"  But in the csv files there is only the first result.  Now that we have a better idea,
is there maybe a different way to do these steps, or is something else wrong?
'''# Define the regex pattern for 100kchallenge
pattern = r"100kchallenge,(\w+),(\d+),(\d+),(\d{4}-\d{2}-\d{2})"
matches = re.findall(pattern, filetext)
print(matches)
input()
# Initialize a set to store unique usernames
unique_usernames = set()
# Initialize a list to store data for writing to CSV
csv_data = []
# Process each match
for match in matches:
username, days_reporting, pushups_completed, date = match
if username not in unique_usernames:
unique_usernames.add(username)
csv_data.append([username, days_reporting, pushups_completed, date])
# Write the data to a CSV file
csv_filename = "C:/PyProjects/PU/100kpushups_data.csv"
with open(csv_filename, "a", newline="") as csvfile:
writer = csv.writer(csvfile)
for entry in csv_data:
writer.writerow(entry)'''
## Copilot
I apologize for the inconvenience. It seems that the duplicate entries are not being filtered out correctly. Let's modify the code to ensure that only the most recent entry for each username is included in the CSV file. We'll use a dictionary to keep track of the latest data for each user. Here's the updated code:

import csv

# Define the regex pattern for 100kchallenge
pattern = r"100kchallenge,(\w+),(\d+),(\d+),(\d{4}-\d{2}-\d{2})"
matches = re.findall(pattern, filetext)

# Initialize a dictionary to store the latest data for each username
latest_data = {}

# Process each match
for match in matches:
    username, days_reporting, pushups_completed, date = match
    if username not in latest_data or date > latest_data[username][3]:
        latest_data[username] = (username, days_reporting, pushups_completed, date)

# Write the data to a CSV file
csv_filename = "C:/PyProjects/PU/100kpushups_data.csv"
with open(csv_filename, "a", newline="") as csvfile:
    writer = csv.writer(csvfile)
    for entry in latest_data.values():
        writer.writerow(entry)

print(f"Data with latest entries written to {csv_filename}")

In this updated code:
- We use a dictionary called latest_data to store the most recent entry for each username.
- If a username is encountered again, we compare the date to determine whether the new entry is more recent.
- Only the latest entry for each username is included in the CSV file.
Please give this modified code a try, and let me know if it resolves the issue!
## You
That worked great, but I think that's because I had two matches on one forum page, and I'm the first to respond.  Do we need to save the dictionary to a file, and then reload it at the beginning of the script for tomorrow's run?
## Copilot
Certainly! Saving the dictionary to a file and reloading it at the beginning of each script run is a good approach to maintain continuity across multiple days. Here's how you can modify your script to achieve this:
- Save Dictionary to File (End of Today's Run):
    - After processing the matches and creating the latest_data dictionary, save it to a file (e.g., a JSON file) using the json module.
    - This file will store the most recent data for each username.
- Reload Dictionary from File (Start of Tomorrow's Run):
    - At the beginning of tomorrow's script run, load the saved dictionary from the file.
    - Use this loaded dictionary as the starting point for processing new data.
Here's an example of how you can implement this:

import csv
import json

# ... (previous code remains unchanged)

# Save the latest_data dictionary to a JSON file
json_filename = "latest_data.json"
with open(json_filename, "w") as jsonfile:
    json.dump(latest_data, jsonfile)

print(f"Latest data saved to {json_filename}")

At the start of tomorrow's script:

import csv
import json

# Load the latest_data dictionary from the JSON file
json_filename = "latest_data.json"
with open(json_filename, "r") as jsonfile:
    latest_data = json.load(jsonfile)

# ... (rest of your script)

Remember to adjust the file paths and filenames as needed for your specific setup. This way, you'll have a persistent record of the latest data even across different script runs.
## You
Great.  I think this is going to be close to finished if we can figure this out.  If you can figure out how to code my comments in the following snippet, that would be incredible.  '''# include the latest page_number in the json somehow
json_filename = "latest_data.json"
with open(json_filename, "r") as jsonfile:
latest_data = json.load(jsonfile)
using page_number = latest_ page_number + 20url = f'https://bitcointalk.org/index.php?topic=5484350.{page_number}'
time.sleep(5)
response = requests.get(url)
if page_number += 20 returns a valid web page response append it to the listturn multiple page sources  into the textfile.source_code = response.text
textfile = open('C:/PyProjects/PU/Temp/PUSource.txt', 'a')
textfile.write(source_code)
when done with the loop textfile.seek(0) and move onfiletext = textfile.read()
textfile.close()
Define the regex pattern for 100kchallengepattern = r"100kchallenge,(\w+),(\d+),(\d+),(\d{4}-\d{2}-\d{2})"
matches = re.findall(pattern, filetext)'''    So we want to cycle through the web addresses, finding matches and, then adding 20 and seeing if that webpage exists, else: move on   # add latest_page_number = page_number to json file at the end, or a different file if easier.
## Copilot
Certainly! Let's break down the steps and implement them in your script:
- Include the Latest Page Number in the JSON:
    - After processing the matches and creating the latest_data dictionary, add the latest page number to it.
    - Save the updated latest_data dictionary to the JSON file.
- Cycle Through Web Addresses and Append Valid Pages:
    - Start with the latest page number plus 20 (as you mentioned).
    - Check if the URL with the new page number returns a valid web page response (status code 200).
    - If it does, append the page source to the list of all page sources.
    - Repeat this process until an invalid page response is encountered.
- Combine All Page Sources into a Single Text File:
    - After the loop, combine all page sources into a single text file (similar to what you're already doing).
Here's how you can modify your code to achieve this:

import csv
import json
import requests
import time

# Load the latest_data dictionary from the JSON file
json_filename = "latest_data.json"
with open(json_filename, "r") as jsonfile:
    latest_data = json.load(jsonfile)

# Calculate the next page number (assuming it's 20 more than the latest page number)
latest_page_number = latest_data.get("latest_page_number", 700)  # Example initial value
page_number = latest_page_number + 20

# Initialize an empty list to store page sources
all_page_sources = []

# Loop through pages
while True:
    url = f"https://bitcointalk.org/index.php?topic=5484350.{page_number}"
    time.sleep(5)
    response = requests.get(url)
    if response.status_code == 200:
        # Valid page response, append it to the list
        all_page_sources.append(response.text)
        page_number += 20
    else:
        # Invalid page response, break the loop
        break

# Combine all page sources into a single text file
with open("C:/PyProjects/PU/Temp/PUSource.txt", "a") as textfile:
    for source_code in all_page_sources:
        textfile.write(source_code)

# Read the combined source text
with open("C:/PyProjects/PU/Temp/PUSource.txt", "r") as textfile:
    filetext = textfile.read()

# Define the regex pattern for 100kchallenge
pattern = r"100kchallenge,(\w+),(\d+),(\d+),(\d{4}-\d{2}-\d{2})"
matches = re.findall(pattern, filetext)

# Initialize a dictionary to store the latest data for each username
latest_data = {}

# Process each match
for match in matches:
    username, days_reporting, pushups_completed, date = match
    if username not in latest_data or date > latest_data[username][3]:
        latest_data[username] = (username, days_reporting, pushups_completed, date)

# Save the latest_data dictionary to a JSON file
with open(json_filename, "w") as jsonfile:
    json.dump(latest_data, jsonfile)

print(f"Latest data saved to {json_filename}")

This updated code should handle cycling through web addresses, appending valid pages, and maintaining the latest page number in the JSON file. Feel free to give it a try, and let me know if you have any further questions!
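(An aside on the code above: nothing ever writes latest_page_number back into latest_data, so every run would restart from page 700, and if it were stored there it would also sneak into the CSV rows as if it were a user.  A minimal patch, picking up right after the paging loop, assuming we keep it as a plain key in the same dictionary:)
Code:
import csv

# after the paging loop, remember the last page that actually existed
latest_data["latest_page_number"] = page_number - 20

# ...and skip the bookkeeping key when writing user rows to the CSV
with open("C:/PyProjects/PU/100kpushups_data.csv", "a", newline="") as csvfile:
    writer = csv.writer(csvfile)
    for key, entry in latest_data.items():
        if key == "latest_page_number":
            continue
        writer.writerow(entry)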
## You
Hmm.  It doesn't return page_number=740 as invalid, it just serves up page_number=720 again.  Oh well.  But the json data saved great; I think we just need to load the json data here, instead of starting empty.  '''# Initialize a dictionary to store the latest data for each username
latest_data = {}

# Process each match
for match in matches:
    username, days_reporting, pushups_completed, date = match
    if username not in latest_data or date > latest_data[username][3]:
        latest_data[username] = (username, days_reporting, pushups_completed, date)'''
## Copilot
I apologize for the oversight. You're absolutely right; we should load the existing JSON data at the beginning of the script to maintain continuity. Let's modify the code accordingly:

import csv
import json
import requests
import time

# Load the latest_data dictionary from the JSON file (if it exists)
json_filename = "latest_data.json"
try:
    with open(json_filename, "r") as jsonfile:
        latest_data = json.load(jsonfile)
except FileNotFoundError:
    # If the file doesn't exist, start with an empty dictionary
    latest_data = {}

# Calculate the next page number (assuming it's 20 more than the latest page number)
latest_page_number = latest_data.get("latest_page_number", 700)  # Example initial value
page_number = latest_page_number + 20

# Initialize an empty list to store page sources
all_page_sources = []

# Loop through pages
while True:
    url = f"https://bitcointalk.org/index.php?topic=5484350.{page_number}"
    time.sleep(5)
    response = requests.get(url)
    if response.status_code == 200:
        # Valid page response, append it to the list
        all_page_sources.append(response.text)
        page_number += 20
    else:
        # Invalid page response, break the loop
        break

# Combine all page sources into a single text file
with open("C:/PyProjects/PU/Temp/PUSource.txt", "a") as textfile:
    for source_code in all_page_sources:
        textfile.write(source_code)

# Read the combined source text
with open("C:/PyProjects/PU/Temp/PUSource.txt", "r") as textfile:
    filetext = textfile.read()

# Define the regex pattern for 100kchallenge
pattern = r"100kchallenge,(\w+),(\d+),(\d+),(\d{4}-\d{2}-\d{2})"
matches = re.findall(pattern, filetext)

# Process each match
for match in matches:
    username, days_reporting, pushups_completed, date = match
    if username not in latest_data or date > latest_data[username][3]:
        latest_data[username] = (username, days_reporting, pushups_completed, date)

# Save the latest_data dictionary to a JSON file
with open(json_filename, "w") as jsonfile:
    json.dump(latest_data, jsonfile)

print(f"Latest data saved to {json_filename}")

This updated code will load the existing JSON data (if available) and continue from there. Thank you for catching that, and I appreciate your attention to detail!
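(As the next message shows, this loop has a catch: the forum serves up the last existing page for any higher offset, so the status code stays 200 and the while loop never breaks on its own.  A minimal guard, assuming consecutive responses come back similar enough to compare as text; comparing the set of message IDs the regex finds would be sturdier:)
Code:
import requests, time

all_page_sources = []
previous_text = None
while True:
    url = f"https://bitcointalk.org/index.php?topic=5484350.{page_number}"
    time.sleep(5)
    response = requests.get(url)
    if response.status_code != 200 or response.text == previous_text:
        break  # out-of-range offsets just re-serve the last page
    all_page_sources.append(response.text)
    previous_text = response.text
    page_number += 20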
## You
Everything is going well so far, but my friends want to know their daily average pushups.  I was hopeful this would work, any help?  '''# Write the data to a CSV file
csv_filename = "C:/PyProjects/PU/100kpushups_data.csv"
with open(csv_filename, "a", newline="") as csvfile:
    writer = csv.writer(csvfile)
    for entry in latest_data.values():
        average = latest_data.values[2]/latest_data.values[1]
        writer.writerow(entry,average)'''
## Copilot
I apologize for the oversight in the code snippet. To calculate the daily average pushups for each user and write it to the CSV file, you can modify the code as follows:

import csv
import json

# Load the latest_data dictionary from the JSON file (if it exists)
json_filename = "latest_data.json"
try:
    with open(json_filename, "r") as jsonfile:
        latest_data = json.load(jsonfile)
except FileNotFoundError:
    # If the file doesn't exist, start with an empty dictionary
    latest_data = {}

# Calculate the average pushups per day for each user
for username, data in latest_data.items():
    days_reporting, pushups_completed, _ = data[1:]  # Ignore the date
    average_pushups = int(pushups_completed) / int(days_reporting)
    latest_data[username] = (*data, average_pushups)

# Write the updated latest_data dictionary to a JSON file
with open(json_filename, "w") as jsonfile:
    json.dump(latest_data, jsonfile)

print(f"Latest data with average pushups saved to {json_filename}")

In this updated code:
- We calculate the average pushups per day for each user by dividing the total pushups completed by the number of days reporting.
- The average pushups value is added to the existing data for each user.
- The updated latest_data dictionary is saved back to the JSON file.
Feel free to give this modified code a try, and let me know if you have any further questions!
## You
So then this snippet with average_pushups included would go next?  '''# Process each match
for match in matches:
    username, days_reporting, pushups_completed, date = match
    if username not in latest_data or date > latest_data[username][3]:
        latest_data[username] = (username, days_reporting, pushups_completed, date, average_pushups)

# Write the data to a CSV file
csv_filename = "C:/PyProjects/PU/100kpushups_data.csv"
with open(csv_filename, "a", newline="") as csvfile:
    writer = csv.writer(csvfile)
    for entry in latest_data.values():
        writer.writerow(entry)

print(f"Data written to {csv_filename}")'''
## Copilot
Certainly! Let's include the average_pushups value in the latest_data dictionary and write it to the CSV file. Here's the modified snippet:

# Calculate the average pushups per day for each user
for username, data in latest_data.items():
    days_reporting, pushups_completed, _ = data[1:]  # Ignore the date
    average_pushups = int(pushups_completed) / int(days_reporting)
    latest_data[username] = (*data, average_pushups)

# Write the data to a CSV file
csv_filename = "C:/PyProjects/PU/100kpushups_data.csv"
with open(csv_filename, "a", newline="") as csvfile:
    writer = csv.writer(csvfile)
    for entry in latest_data.values():
        writer.writerow(entry)

print(f"Data with average pushups written to {csv_filename}")

In this updated code:
- We calculate the average_pushups value for each user.
- The latest_data dictionary is updated to include the average pushups value.
- The data (including the average) is written to the CSV file.
Feel free to give this modified code a try, and let me know if you have any further questions!
## You
I think we should leave the average pushups out of the json file.  Can we follow my comments here, and calculate 'average = entry[2]/entry[1]' and put it in the row?  '''# read data from csv, and calculate average pushups per day
rows = []
with open(csv_filename, 'r') as file:
    reader = csv.reader(file)
    for row in reader:
        # calculate average pushups per day, place it in the row
        # average = entry[2]/entry[1]
        rows.append(row)

# Sort the rows by pushups completed (descending order)
sorted_rows = sorted(rows[0:], key=lambda x: int(x[2]), reverse=True)'''
## Copilot
Certainly! Let's calculate the average pushups per day for each user and include it in the row. Here's the modified snippet:

import csv

# Read data from the CSV file
csv_filename = "C:/PyProjects/PU/100kpushups_data.csv"
rows = []
with open(csv_filename, 'r') as file:
    reader = csv.reader(file)
    for row in reader:
        # Calculate average pushups per day
        days_reporting, pushups_completed = int(row[1]), int(row[2])
        average_pushups = pushups_completed / days_reporting
        # Append the average to the row
        row.append(average_pushups)
        rows.append(row)

# Sort the rows by pushups completed (descending order)
sorted_rows = sorted(rows[1:], key=lambda x: int(x[2]), reverse=True)

# Write the data to a new CSV file with headers
new_csv_filename = "C:/PyProjects/PU/100kpushups_data_with_average.csv"
with open(new_csv_filename, "w", newline="") as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(["Username", "Days Reporting", "Pushups Completed", "Date", "Average Pushups"])
    for entry in sorted_rows:
        writer.writerow(entry)

print(f"Data with average pushups written to {new_csv_filename}")

In this updated code:
- We calculate the average_pushups value for each user based on the total pushups completed and the number of days reporting.
- The average pushups value is appended to the existing data for each user.
- The data (including the average) is written to the new CSV file.
Feel free to give this modified code a try, and let me know if you have any further questions!
Haha, I messed up my own format by first saying 100kpushups, then correctly saying 100kchallenge.  I have now changed the regex to accept 100k on its own, or 100k plus up to any 9 word characters, to handle misspellings.
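Roughly like this, if I remember the change right (the {0,9} is my reading of 'up to any 9 word characters'):
Code:
# accept 100k on its own or with up to nine extra word characters,
# so 100kchallenge and 100kpushups both match
pattern = r"100k\w{0,9},(\w+),(\d+),(\d+),(\d{4}-\d{2}-\d{2})"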
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Missed it by that much.  {editor: that was supposed to be 6 pt font, but we'll let it run because reasons}  Undecided
I changed the Task Scheduler times, and the scripts overlapped.  So the ChartBuddy script ran great starting at 00:15 UTC, and then the Top20 script started at 00:20 UTC, which is not enough of a delay.  What was I thinking?  

I checked the status from my phone at 00:30 UTC and was happy to see the recap posted, but the script that had been flawless the week prior didn't work.  I could see through my phone that the terminal had passed the message "No Table 44U", which happens when it is not a new year, month, or week, so those stats don't get posted.  Wow, this got complicated.

So when I figured out how to 'Ctrl'-'v' over the phone, the ChartBuddy post showed up.  Well, short story long, I got home and simply had to rerun the parts of the script that don't do the downloading.  Part of the script downloads the data and formats the columns; the other part of the script takes the columns and makes the tables.  The updated main table maker, with date choice, is posted at the end.

For the newest pushup counter script I went headlong into copy-paste mode with Copilot and came up with this last night, plus one change by me.  Smiley  The tabulate python module wasn't accepting the decimal place inputs I believed I was giving it, and the average pushups column kept coming up with too many decimal places.  Well, I'm chuffed I was able to make this change to the script to have it look how I wanted.
Code:
average_pushup = pushups_completed / days_reporting
        average_pushups = f"{average_pushup:.2f}"
The Copilot-suggested code originally just passed along the average_pushups variable; I renamed that one to average_pushup so I could build the formatted average_pushups from it without having to change anything else.
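(For what it's worth, tabulate also takes a floatfmt argument that rounds float columns for display, something like the line below, though by that point my averages were already strings:)
Code:
print(tabulate(rows, headers=headers, tablefmt="rounded_grid", floatfmt=".2f"))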
Pushup script
Code:
import csv, json, os, pyautogui, pyperclip, re, requests, time, urllib.request, webbrowser
from datetime import datetime, timezone, timedelta
from os import rename
from tabulate import tabulate

# set UTC dates for links and new folder, so 'today' is around 30 minutes old when this is run
today = datetime.now(timezone.utc)
yesterday = today - timedelta(1)
print(today)

json_filename = "C:/PyProjects/PU/PU/latest_data.json"
with open(json_filename, "r") as jsonfile:
    latest_data = json.load(jsonfile)

url = 'https://bitcointalk.org/index.php?topic=5484350.740'
time.sleep(5)
response = requests.get(url)

# turn page source of Pushups page into textfile.
source_code = response.text
textfile = open('C:/PyProjects/PU/PUSource.txt', 'w+', encoding="UTF-8")
textfile.write(source_code)
textfile.seek(0)
filetext = textfile.read()
textfile.close()

# Define the regex pattern for 100kchallenge
pattern = r"100kchallenge,(\w+),(\d+),(\d+),(\d{4}-\d{2}-\d{2})"
matches = re.findall(pattern, filetext)
print(matches)
# input()
csv_data = []

# Load the latest_data dictionary from the JSON file (if it exists)
json_filename = "C:/PyProjects/PU/PU/latest_data.json"
try:
    with open(json_filename, "r") as jsonfile:
        latest_data = json.load(jsonfile)
except FileNotFoundError:
    # If the file doesn't exist, start with an empty dictionary
    latest_data = {}
    

# for username, data in latest_data.items():
#     days_reporting, pushups_completed, _, = data[1:]  # Ignore the date
#     average_pushups = int(pushups_completed) / int(days_reporting)
#     latest_data[username] = (*data, average_pushups)

# Process each match
for match in matches:
    username, days_reporting, pushups_completed, date = match
    if username not in latest_data or date > latest_data[username][3]:
        latest_data[username] = (username, days_reporting, pushups_completed, date)

# Write the data to a CSV file
csv_filename = "C:/PyProjects/PU/100kpushups_data.csv"
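# note: append mode ("a") re-adds every user from latest_data each run,
# so the ranking step below may pick up stale rows from earlier days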
with open(csv_filename, "a", newline="") as csvfile:
    writer = csv.writer(csvfile)
    for entry in latest_data.values():
        writer.writerow(entry)

print(f"Data written to {csv_filename}")
# Read data from the CSV file

json_filename = "C:/PyProjects/PU/PU/latest_data.json"
with open(json_filename, "w") as jsonfile:
    json.dump(latest_data, jsonfile)

print(f"Latest data saved to {json_filename}")

# read data from csv, and calculate average pushups per day
rows = []
with open(csv_filename, 'r') as file:
    reader = csv.reader(file)
    for row in reader:
        # calculate average pushups per day place it in the row
        days_reporting, pushups_completed = int(row[1]), int(row[2])
        average_pushup = pushups_completed / days_reporting
        average_pushups = f"{average_pushup:.2f}"
        row.append(average_pushups)
        rows.append(row)

# Sort the rows by pushups completed (descending order)
sorted_rows = sorted(rows[0:], key=lambda x: int(x[2]), reverse=True)

for i, row in enumerate(sorted_rows):
    row.insert(0, i + 1)

new_csv_filename = "C:/PyProjects/PU/100kpushups_rank.csv"
with open(new_csv_filename, "w", newline="") as csvfile:
    writer = csv.writer(csvfile)
    for entry in sorted_rows:
        writer.writerow(entry)


rows = []
with open(new_csv_filename, 'r') as file:
    reader = csv.reader(file)
    for row in reader:
        rows.append(row)

# Print the data as a table
headers = ["Order", "Username", "Days In", "Pushups", "Last Date", "Daily Average"]
print(tabulate(rows[0:], headers=headers, tablefmt="rounded_grid"))
post = (tabulate(rows[0:], headers=headers, tablefmt="rounded_grid"))
pyperclip.copy(f"[pre]{post}[/pre]")













###########                            ☰☰☰        ▟█
# The End #                       ☰☰☰☰☰☰  ⚞▇▇▇██▛(°)>
###########                            ☰☰☰        ▜█
Current Top20 Main Tablemaker code with optional 4th table highlighting the all time Top20 Years, Months, and Weeks of BTC
Code:
if todayutc.isoweekday() == 1 or todayutc.day == 1 or todayutc.month == 1 and todayutc.day == 1:
        import t4_table
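(Small aside: the year clause in that gate is redundant, since day == 1 already fires on January 1st, so this is equivalent:)
Code:
if todayutc.isoweekday() == 1 or todayutc.day == 1:
        import t4_table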
Code:
import datetime, pyautogui, pyperclip, shutil, time, webbrowser
from datetime import datetime, timezone

todayutc = datetime.now(timezone.utc)

# sets a sleep timer for faster runs during testing
sleep_time = 15

# calls the scripts which create each column to be zipped together below
import Top20USD
time.sleep(sleep_time)
import Top20GBP
time.sleep(sleep_time)
import Top20EUR
time.sleep(sleep_time)
import Top20CAD
time.sleep(sleep_time)
import Top20JPY
time.sleep(sleep_time)

# location of the columns created above
usd = 'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_USD/top100usd.txt'
gbp = 'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_GBP/top100gbp.txt'
eur = 'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_EUR/top100eur.txt'
cad = 'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_CAD/top100cad.txt'
jpy = 'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_JPY/top100jpy.txt'

file_paths = [usd, gbp, eur, cad, jpy]
print("importingfiles")
with open('C:/PyProjects/Top20/Top20_Total_03_24/top20table.txt', 'w', encoding='utf-8') as output_file:

    # Read lines from all files simultaneously
    with open(file_paths[0], 'r', encoding='utf-8') as file1, \
            open(file_paths[1], 'r', encoding='utf-8') as file2, \
            open(file_paths[2], 'r', encoding='utf-8') as file3, \
            open(file_paths[3], 'r', encoding='utf-8') as file4, \
            open(file_paths[4], 'r', encoding='utf-8') as file5:
        
        for line1, line2, line3, line4, line5 in zip(file1, file2, file3, file4, file5):

            # Write combined lines to the output file
            output_line = f"| {line1.strip()}| {line2.strip()}| {line3.strip()}| {line4.strip()}| {line5.strip()}|\n"
            output_file.write(output_line)
# progress check
print("clipboarding")

# putting the trad5 part together, setting links as variables to limit code line length
with open('C:/PyProjects/Top20/Top20_Total_03_24/Top20table.txt', 'r') as top100:
    ul = 'https://bitcoincharts.com/charts/bitstampUSD'
    gl = 'https://data.bitcoinity.org/markets/volume/5y/GBP/kraken?r=day&t=b'
    el = 'https://data.bitcoinity.org/markets/volume/5y/EUR/kraken?r=day&t=b'
    cl = 'https://data.bitcoinity.org/markets/volume/5y/CAD/kraken?r=day&t=b'
    jl = 'https://data.bitcoinity.org/markets/volume/5y/JPY/kraken?r=day&t=b'
    list = top100.read()
    pre = f"[pre]   US Dollar              British Pound          EU Euro                Canadian Dollar        Japanese Yen"
    prelude = f"[size=9pt][b]| [url={ul}]Rank BitStamp   USD/BTC[/url]| [url={gl}]Rank Kraken     GBP/BTC[/url]| [url={el}]Rank Kraken     EUR/BTC[/url]| [url={cl}]Rank Kraken     CAD/BTC[/url]| [url={jl}]Rank Kraken        JPY/BTC[/url]|[/b]"
    explanation = "[url=https://bitcointalk.org/index.php?topic=138109.msg54917391#msg54917391]|                                                    *  Chart  Legends  *                                                       |[/url]"
    JimboToronto = "|                                                          GoBTCGo™                                                             |[/pre]"
    full_post1 = f"{pre}\n{prelude}\n{list}{explanation}\n{JimboToronto}"

# on to the next set of tables, this script calls 4 separate currency scripts
import tm_T2_03_21

# making the second table
with open('C:/PyProjects/Top20/Top20_Total_03_24/Top20tableT2.txt', 'r') as top100T2:
  
    il = 'https://data.bitcoinity.org/markets/volume/5y/IDR/bitcoin.co.id?r=day&t=b'
    kl = 'https://data.bitcoinity.org/markets/volume/5y/KRW/korbit?r=day&t=b'
    al = 'https://data.bitcoinity.org/markets/volume/5y/AUD/btcmarkets?r=day&t=b'
    ual = 'https://data.bitcoinity.org/markets/volume/5y/UAH/exmo?r=day&t=b'
    bv = 'https://data.bitcoinity.org/markets/volume/5y/IDR/bitcoin.co.id?r=day&t=b'
    listT2 = top100T2.read()
    pre = f"[pre]  Korean Won               Australian Dollar    Ukrainian Hryvnia      Indonesian Rupiah          Bitstamp Exchange"
    prelude = f"[size=9pt][b]| [url=https://data.bitcoinity.org/markets/volume/5y/KRW/korbit?r=day&t=b]#  Korbit         KRW/BTC[/url]| [url=https://data.bitcoinity.org/markets/volume/5y/AUD/btcmarkets?r=day&t=b]#  btcmarkets AUD/BTC[/url]| [url=https://data.bitcoinity.org/markets/volume/5y/UAH/exmo?r=day&t=b]#  exmo         UAH/BTC[/url]| [url=https://data.bitcoinity.org/markets/volume/5y/IDR/bitcoin.co.id?r=day&t=b]#  bitcoin.co.id    IDR/BTC[/url]| [url=https://bitcoincharts.com/charts/bitstampUSD]#  BTC vol  in MUSD[/url]|[/b]"
    explanation = "[url=https://bitcointalk.org/index.php?topic=138109.msg54917391#msg54917391]|                                                    *  Chart  Legends  *                                                    |[/url]"
    JimboToronto = "|                                                          GoBTCGo™                                                          |[/pre]"
    full_post2 = f"{pre}\n{prelude}\n{listT2}{explanation}\n{JimboToronto}"

# the set of newest moving day average columns
import tm_T3_03_30

with open('C:/PyProjects/Top20/Top20_Total_03_24/Top20tableT3.txt', 'r') as top100T3:
    ul = 'https://bitcoincharts.com/charts/bitstampUSD'
    listT3 = top100T3.read()
    pre = f"[pre]Moving Average VWAP, date listed is final day in span\n|  7 Day MA              30 Day MA            200 Day MA            100 Week MA            200 Week MA         |"
    prelude = f"[size=9pt][b]|  [url={ul}]#      7 DMA  USD/BTC[/url]|  [url={ul}]#      30 DMA USD/BTC[/url]| [url={ul}]#      200 DMA USD/BTC[/url]| [url={ul}]#      100 WMA USD/BTC[/url]|  [url={ul}]#     200 WMA USD/BTC[/url]|[/b]"
    explanation = "[url=https://bitcointalk.org/index.php?topic=138109.msg54917391#msg54917391]|                                                  *  Chart  Legends  *                                                 |[/url]"
    JimboToronto = "|                                                        GoBTCGo™                                                       |[/pre]"
    full_post3 = f"{pre}\n{prelude}\n{listT3}{explanation}\n{JimboToronto}"

    if todayutc.isoweekday() == 1 or todayutc.day == 1 or todayutc.month == 1 and todayutc.day == 1:
        import t4_table

        with open('C:/PyProjects/Top20/Top20_Total_03_24/Top20tableT4.txt', 'r') as top100T4:
            ul = 'https://bitcoincharts.com/charts/bitstampUSD'
            listT4 = top100T4.read()
            pre = f"[pre]It must be a new Year, Month, or Week!\nVWAP for Calendar ranges = sum of vwap for days in range / # of days in range.\n  Top Year        Top Month          Top Week ending on"
            prelude = f"[size=9pt][b]| [url={ul}]#        USD/BTC[/url]| [url={ul}]#           USD/BTC[/url]| [url={ul}]#              USD/BTC[/url]|[/b]"
            explanation = "[url=https://bitcointalk.org/index.php?topic=138109.msg54917391#msg54917391]|                     *  Chart  Legends  *                     |[/url]"
            JimboToronto = "|                           GoBTCGo™                           |[/pre]"
            full_post4 = f"{pre}\n{prelude}\n{listT4}{explanation}\n{JimboToronto}"
            full_post = f"Daily [BTC] Volume Weighted Average Prices\n{full_post1}\n{full_post2}\n{full_post3}\n{full_post4}"
    else:
        print("No table 44U!")
        full_post = f"Daily [BTC] Volume Weighted Average Prices\n{full_post1}\n{full_post2}\n{full_post3}"
    
    pyperclip.copy(full_post)

    # can use this link for the reply page to top20 thread
    url = 'https://bitcointalk.org/index.php?action=post;topic=138109.0'
    webbrowser.open(url)
    time.sleep(60)
    pyautogui.hotkey('f11')
    time.sleep(20)
    pyautogui.hotkey('tab')
    time.sleep(2)
    pyautogui.hotkey('tab')
    time.sleep(2)
    pyautogui.hotkey('ctrl', 'v')
    time.sleep(2)
    pyautogui.hotkey('tab')
    time.sleep(2)
    # we're doing it live if the next command is #ed out
    # pyautogui.hotkey('tab')
    time.sleep(5)
    pyautogui.hotkey('enter')
    time.sleep(60)
    pyautogui.hotkey('f11')
    
# if I don't delete the cache each time, the minor changes I make don't show up the next run, I think
shutil.rmtree('C:/PyProjects/Top20/Top20_Total_03_24/__pycache__')


















# ###########                            ☰☰☰         ▟██
# # The End #                       ☰☰☰☰☰☰  ⚞▇▇▇▋███▎▎(⚬)>
# ###########                            ☰☰☰         ▜██
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
All righty then.  I've got ChartBuddy with extended pauses ready to go.  I've got the 4th table in the Top20 Days of Bitcoin thread ready to go, and have put the newest data back, unlike yesterday.  I don't have the Mango signature totally figured out, and it's breaking my brain.  I cannot figure out why some hours engage the 'display' and some just fill it with sky_color.  Here is the rising moon in the 'display' portion of the sig.

███████████████████████████████████████████            ▟██           ████████████
████████████████████████████████████████        ⚞▇▇▋███▎▎()>       ██████████
███████████████████████████████████████████            ▜██           ████████████ 
If you scroll to the end you can see the two white 'pixels'.  This is a working example, but most of the rest just take the sky_color variable and ignore the color command if one is present.
Code:
if utcu == 20:
    a0 = sky_color; a1 = sky_color; a2 = sky_color; a3 = sky_color; a4 = sky_color; a5 = sky_color; a6 = sky_color
    b0 = sky_color; b1 = sky_color; b2 = sky_color; b3 = sky_color; b4 = sky_color; b5 = sky_color; b6 = sky_color
    c0 = sky_color; c1 = sky_color; c2 = sky_color; c3 = "white"; c4 = "white"; c5 = sky_color; c6 = sky_color
Code:
[b][color=#114054]███████████████████████████████████████████[/color]            [color=green]▟██[/color]           [color=#114054]██████████[/color][/b][color=#114054]█[/color][color=#114054]█[/color][color=#114054]█[/color][color=#114054]█[/color][color=#114054]█[/color][color=#114054]█[/color][color=#114054]█[/color][color=#114054]██[/color]
[b][color=#114054]████████████████████████████████████████[/color]  [color=green]      ⚞[/color][color=#0197d2]▇▇[/color][color=green]▋███[/color][color=#e7cc3d]▎▎[/color][color=#dd7e3e]([/color]⚬[color=orange])[/color]>       [color=#114054]████████[/color][/b][color=#114054]█[/color][color=#114054]█[/color][color=#114054]█[/color][color=#114054]█[/color][color=#114054]█[/color][color=#114054]█[/color][color=#114054]█[/color][color=#114054]██[/color]
[b][color=#114054]███████████████████████████████████████████[/color]            [color=green]▜██[/color]           [color=#114054]██████████[/color][/b][color=#114054]█[/color][color=#114054]█[/color][color=#114054]█[/color][color=white]█[/color][color=white]█[/color][color=#114054]█[/color][color=#114054]█[/color][color=#114054]██[/color]

This and the next few hours work, but I've got more code checking to do.

So I've got that to chew on, and apparently I'm going to try to get a script to scrape the 100 Push-Ups A Day Until Bitcoin Is $100K Challenge thread, to keep track of everyone's pushups.  It sounds like, the same way I search for ChartBuddy's posts in the page source, I should be able to key in on this data as well.  I've asked them to post like
"format:
100kchallenge,UserName,daysreporting,pushupscompleted

example:
100kchallenge,JayJuanGee,63,10550"
I'm sure that is going to end well.   Cheesy  And now JayJuanGee wants to include the date, that sure simplifies things.  Smiley  Come join us everyone!
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
It's funny how things work sometimes, or rather don't work.  All week long I was ready to go fully automated posting.  I've set up a laptop as a 'home posting machine' for when I'm out on the road at UTC midnight, when it all goes down.  Wink  Top20 list at 00:20 UTC, Buddy recap at 00:30 UTC.  I can remote into the laptop using my phone, but it's tough to do much.  So Monday was just lucky somehow, maybe because I rebooted Sunday night, but the laptop had a banner day.  See, this is what happens to the big developers: they code the game on a supercomputer, and then expect it to run on general hardware.  The ChartBuddy recap script, which has to load 24 images into GIMP, was written on a computer with an SSD, while the laptop is on a much slower spinning disk drive.  So the overabundance of caution in giving that step 20 seconds, plenty on the SSD, proved to be enough time just once for the laptop.  We're kinda jumping around though, because I only figured out that this was the problem today.  Right here is the keypress to start loading images, making gifs, and then exporting.  It's worse than I thought. Smiley
Code:
pyautogui.hotkey('enter')
time.sleep(20)
# sequence to quit, and agree not to save
pyautogui.hotkey('ctrl', 'q')
I have since increased the time to 120 seconds, not 20.  
So Monday: shortly before the end of the hour, I checked on how it went with my phone.  I was very happy to see at least the CB Recap went off.  I squinted a little closer at my phone and saw the Top20 list hadn't crashed; it was waiting for an input.  I remembered that I had put a pause in the code during error checking the previous night.  I was able to click into the terminal and send an 'enter' over the wires to have the script proceed, and just made it under the wire.  What a world. Smiley  The end of the hour deadline is self imposed, but it would make things messy to run the ChartBuddy script an hour late.

Tuesday:  I had fixed the Top20 input problem, and that went off perfectly.  But then, you guessed it, the CB script threw a weird error about how the help file wasn't installed and I needed to choose the online version in the preferences or download a local copy.  It was like an ominous threat.  But thankfully the images had successfully downloaded and the gifs were made.  I just reproduced the error, and what happens is the files start loading, but hitting quit opens another menu.  The gifs get made, but GIMP never closes, and all the pyautogui steps I use to post the post fall apart because GIMP is open fullscreen.  Luckily I've been in the crashing business for some time, and I knew that if I could somehow run only the second half of the script it should work fine.  Luckily the phone touch screen is okay at 'clicking and dragging', so, knowing I had backups, I deleted the already-done steps, hit File, Save, and reran it successfully.

Wednesday, Thursday, Friday:  Same story.  Autoposting rate 5/10 successful auto launches to completion, but I would say a solid 7.5 if giving partial credit. Cheesy  10/10 deadlines met.

Saturday:  Do you see it coming?  I did some practice runs with the ChartBuddy script and noticed I was putting the bigger time.sleep in the wrong place: I had it after pressing the keys to load the plug in, not the key to run the plug in.  Then I got to work changing the script that had worked all week.  In fairness, I was waiting until the weekend to change some tables, but 3 minutes before go time is not the time to be doing the final save before a run.  See, what had happened was: to test the Top20 BTC weeks (Monday to Monday), months, and years, it was easiest to move aside the current week's, month's, and year's partial data, run the script, and then put the data back.  I then set those new scripts to only run on the first day of a new time period.  Except you don't put the data back, and you get bad tables.  Luckily my experience with crashing let me grab the latest backup of all the files, fix a few things, and rerun it from home base, 7 minutes late.  50% Saturday grade.  I need the upcoming 100% tomorrow for confidence going into next week.  Difficulty:  I want another new row of tables tomorrow.

I've got a new project as a tribute to my late friend Mango.  It's bringing me joy, and I'm looking forward to my next rank increase, when I think I can make the 'pixels' in my signature smaller than they are now.  But I've got my signature auto updating every hour.  Mango gets to fly across the screen every day, and some fun stuff happens along the way occasionally.  As the sky grows behind him, it shrinks ahead.  At the end of the signature I set up a 3 by 7 'display' that is fairly easy to update.  The following is a blank display.  By changing sky_color to "green", that pixel will be green.
Code:
# careful: 'if utch == 18 or 19:' is always true (the bare 19 is truthy on its own),
# which silently runs every hour's block and leaves only sky_color showing
if utch == 18 or utch == 19:  # sky_color is a call to a dictionary of changing colors depending on hour of day
    a0 = sky_color; a1 = sky_color; a2 = sky_color; a3 = sky_color; a4 = sky_color; a5 = sky_color; a6 = sky_color
    b0 = sky_color; b1 = sky_color; b2 = sky_color; b3 = sky_color; b4 = sky_color; b5 = sky_color; b6 = sky_color
    c0 = sky_color; c1 = sky_color; c2 = sky_color; c3 = sky_color; c4 = sky_color; c5 = sky_color; c6 = sky_color

# each sig line starts with sky growing behind Mango; the Mango ascii art goes in between
top_line = f"[b][color={sky_color}]{'█' * 2 * utch}███[/color]"
mid_line = f"[b][color={sky_color}]{'█' * 2 * utch}[/color]"
low_line = f"[b][color={sky_color}]{'█' * 2 * utch}███[/color]"

# then the sky ahead of him shrinks (in the real sig this is all on the same line)...
f"[color={sky_color}]{'█' * (50 - (2 * utch))}[/color][/b]"

# ...and the 3x7 'display' pixels close out the line
f"[color={a0}]█[/color][color={a1}]█[/color][color={a2}]█[/color][color={a3}]█[/color][color={a4}]█[/color][color={a5}]█[/color][color={a6}]█[/color][color={sky_color}]██[/color]"

I promised the latest ChartBuddy code last post I think, so here it is.  I think it's good for a while.  I would be a lot less nervous if I could get the plug in to run without the GIMP GUI involved, but all efforts have been in vain, although now...maybe...the headless batch command is not working because I need a longer pause in the plug in itself?
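For reference, this is the shape of the headless attempt (the plug-in name is a placeholder, and on Windows the console binary may be gimp-console-2.10.exe rather than gimp, so treat this as a sketch rather than working code):
Code:
import subprocess

# -i runs GIMP with no interface; each -b is one batch command
subprocess.run([
    "gimp", "-i",
    "-b", "(python-fu-cb-recap RUN-NONINTERACTIVE)",  # placeholder plug-in name
    "-b", "(gimp-quit 0)",
], timeout=600)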
Secret Keys Removed Smiley
Code:
import os, pyautogui, pyperclip, re, requests, time, urllib.request, webbrowser
from datetime import datetime, timezone, timedelta
from os import rename

startTime = time.perf_counter()

# set UTC dates for links and new folder, so 'today' is around 30 minutes old when this is run
today = datetime.now(timezone.utc)
yesterday = today - timedelta(1)
print(today)

# get the final 20 gif layers in reverse order, starting with 24, if there is no Cloudflare
number = 24
url4 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=0'
time.sleep(20)
response = requests.get(url4)

# turn page source of ChartBuddy's latest posts page into textfile.
source_code = response.text
textfile = open('C:/PyProjects/CB/Temp/CBSource.txt', 'w+')
textfile.write(source_code)
textfile.seek(0)
filetext = textfile.read()
textfile.close()

# find matches using regex for 1 image posted today at around 00:02 UTC...
pattern1 = r'https:\/\/www\.talkimg\.com\/images\/\w{{4}}\/\w{{2}}\/{0}\/\w{{5}}\.png'.format(today.strftime('%d'))
print(pattern1)

# and from yesterday, set a counter for total downloads, plenty of copilot help with my regex
pattern2 = r'https:\/\/www\.talkimg\.com\/images\/\w{{4}}\/\w{{2}}\/{}\/\w{{5}}\.png'.format(yesterday.strftime('%d'))
print(pattern2)
matches = re.findall(pattern1, filetext) + re.findall(pattern2, filetext)

# python is so cool you can just cycle through stuff, without setting counters...
total_dl =  0
for link in matches:
    dl_number = f"{number:02d}"
    total_dl += 1

    # status update, and the GIMP plug in wants these names, will rename below
    print(number, link)
    urllib.request.urlretrieve(link, 'C:/PyProjects/CB/images/download ({}).png'.format(dl_number))
    number = number - 1

    # wait between downloads to not stress talkimg.com.  I did work for a donation to them :)
    time.sleep(5)

# something I'm saving, why not?  might be useful one day...
os.remove('C:/PyProjects/CB/Temp/CBSource.txt')
print("going on")

# get the first 4 images on the next page, which are the last frames of the gif
url5 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=20'
time.sleep(20)
response5 = requests.get(url5)
source_code = response5.text
textfile5 = open('C:/PyProjects/CB/Temp/CBSource2.txt', 'a+')
textfile5.write(source_code)
textfile5.seek(0)
filetext2 = textfile5.read()
textfile5.close()

# these could only be from yesterday, since 'today' is only 30 minutes old
matches = re.findall(pattern2, filetext2)
for link in matches:
    if number >= 1:
        dl_number = f"{number:02d}"
        total_dl += 1
        print(number, link)
        urllib.request.urlretrieve(link, 'C:/PyProjects/CB/images/download ({}).png'.format(dl_number))
        number = number - 1
        time.sleep(2)
    if number <= 0:
        print("skipping link")
os.remove('C:/PyProjects/CB/Temp/CBSource2.txt')
time.sleep(5)

# hot keys for gif making, main monitor focusing click
pyautogui.click(1, 1)
time.sleep(5)

# open GIMP, be patient
pyautogui.hotkey('ctrl', 'alt', 'g')
time.sleep(60)
pyautogui.hotkey('f11')
time.sleep(20)

# click on middle of GIMP window to be sure
pyautogui.click(820, 446)
time.sleep(20)

# hotkeys to plugin that loads layers, exports, resizes
pyautogui.hotkey('ctrl', 'alt', 'l')
time.sleep(20)

# need to learn GUI-less GIMP; keys to finalize the plug-in
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('enter')
time.sleep(120)

# sequence to quit, and agree not to save
pyautogui.hotkey('ctrl', 'q')
time.sleep(10)
pyautogui.hotkey('shift', 'tab')
time.sleep(5)
pyautogui.hotkey('enter')
time.sleep(60)

# uploading big gif and getting link to use later,
url = "https://api.imgur.com/3/image"
payload = {'name': f'{yesterday:%m}-{yesterday:%d}-{yesterday.year}'}
files=[('image',('gif.gif',open('C:/PyProjects/CB/Temp/gif.gif','rb'),'image/gif'))]
headers = {'Authorization': 'Bearer xxxxxxxxxxxxxxxxxxxx'}
response = requests.post(url, headers=headers, data=payload, files=files)
data = response.json()
imgur_big_gif = data.get("data", {}).get("link")

# uploading talkimg gif and getting link to use later
url = "https://talkimg.com/api/1/upload"
headers = {"X-API-Key": "xxxxxxxxxxxxxxxxxxxxxxxxxxx"}
files = {"source": open("C:/PyProjects/CB/Temp/gif2.gif", "rb")}
payload = {"title": f'{yesterday:%m}-{yesterday:%d}-{yesterday.year}', "album_id": "UFbj"}
response = requests.post(url, headers=headers, data=payload, files=files)
data = response.json()
talkimg_gif = data["image"]["url"]

# add post to clipboard for btctalk
# single quotes around 'v' so the nested quotes don't break the f-string on older Pythons
pyperclip.copy(f"ChartBuddy's Daily Wall Observation recap\n[url={imgur_big_gif + 'v'}].[img]{talkimg_gif}[/img].[/url]\nAll Credit to [url=https://bitcointalk.org/index.php?topic=178336.msg10084622#msg10084622]ChartBuddy[/url]")

# can use this link for the reply button, keys to get to post entry box
url7 = 'https://bitcointalk.org/index.php?action=post;topic=178336.0'
webbrowser.open(url7)
time.sleep(60)
pyautogui.hotkey('f11')
time.sleep(20)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)

# paste the post and head to post or preview button
pyautogui.hotkey('ctrl', 'v')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)

# we're doing it live if the next command is #ed out
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('enter')
time.sleep(5)
pyautogui.hotkey('f11')
time.sleep(20)

# name new image folder with date, if not already created
directory = f"{yesterday.year}/{yesterday.year}-{yesterday:%m}/"
parent_dir = "C:/PyProjects/CB/CBuddyDaily/"
newf = os.path.join(parent_dir, directory)
present = os.path.exists(newf)
if not present:
    os.mkdir(newf)
src = "C:/PyProjects/CB/images/"
files = sorted(os.listdir(src))  # sort so download (01)..(24) keep their hour order
os.chdir(src)
image = 1
for file in files:

    # this names the images yyyy-mm-dd (01) through (24), making the most recent image hour 24 because its the last hour, not the first hour
    name = os.path.join(src, file)
    rename (name, f"C:/PyProjects/CB/CBuddyDaily/{yesterday.year}/{yesterday.year}-{yesterday:%m}/{yesterday.year}-{yesterday:%m}-{yesterday:%d}({image:02d}).png")
    image += 1

giff = f"C:/PyProjects/CB/gifs/{yesterday.year}/{yesterday.month}-{yesterday.year}/"
present = os.path.exists(giff)
if not present:
    os.mkdir(giff)
rename ("C:/PyProjects/CB/Temp/gif.gif", f"C:/PyProjects/CB/gifs/{yesterday.year}/{yesterday.month}-{yesterday.year}/b{yesterday:%y}-{yesterday:%m}-{yesterday:%d}.gif")
rename (f"C:/PyProjects/CB/Temp/gif2.gif", f"C:/PyProjects/CB/gifs/{yesterday.year}/{yesterday.month}-{yesterday.year}/{yesterday:%y}-{yesterday:%m}-{yesterday:%d}.gif")




###########                           ☰☰☰             ▟█
# The End                              ☰☰☰☰☰☰  ⚞▇▇▇██▛(°)>
###########                           ☰☰☰             ▜█

sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
what to do with missing frames
Maybe make the previous frame gray? That way, it's still clear the hour passed, but the data is missing. Or a graph without data.
The graph with no data sounds cool.  I think I've previously seen a blank ChartBuddy graph posted like you describe.  Maybe a grey frame with the moonlay prototype, mentioned above, inserted.  Exciting times for sure.  Next rank I think I can ensmallen my signature characters.  We'll see.  FlyMangoFly
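Something like this could make the grey stand-in frame, assuming Pillow gets wired in (grey_placeholder is hypothetical, nothing in the script calls it yet):
Code:
from PIL import Image, ImageOps

def grey_placeholder(previous_png, missing_png):
    # reuse the previous hour's chart, converted to grayscale, as the missing frame
    ImageOps.grayscale(Image.open(previous_png)).save(missing_png)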
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
what to do with missing frames
Maybe make the previous frame gray? That way, it's still clear the hour passed, but the data is missing. Or a graph without data.
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Ok.  So.  Here's where we're at.  I have moved to posting ChartBuddy's daily recap at UTC midnight.  Either with my phone from the road, using Chrome Remote to a home laptop, or, if I can get Task Scheduler to work more consistently, automatically starting tomorrow.

Storylog:
1.Download I have changed, with help, the regular expression to download the previous day's images only, whether that is zero or 24.  Next would be what to do with missing frames, and what the minimum frame count would be to even do a recap that day.  This also allows a variable, total_downloads, to be set for later commands.  I also need to somehow pass that information to the GIMP plugin.

2.Import  I'm going to try to save the total_downloads variable to a file, so that the GIMP plug in can load in the correct number of frames for the gif.  Or just run the same code inside the GIMP plugin to count the number of files/frames/images to import (tiny sketch after this list).

3.Export Secret API keys working consistently.  I think I need to refresh one of them soon somehow.

4.Posting Currently testing having Windows Task Scheduler run the script, through a batch file.
5.Archive Starting tomorrow all frames will hopefully be saved with dates, yyyy-mm-dd.  As suggested above, and yes, it should make sorting so much easier going forward.  I'll post the code after doing a renaming and clean up effort tomorrow.
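Here's the tiny sketch for item 2, assuming a scratch file in the Temp folder that both sides can reach:
Code:
# main script: write the frame count once the downloads finish
with open('C:/PyProjects/CB/Temp/frame_count.txt', 'w') as f:
    f.write(str(total_downloads))

# GIMP plug-in: read it back to know how many layers to load
with open('C:/PyProjects/CB/Temp/frame_count.txt') as f:
    total_downloads = int(f.read().strip())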

For the Top20 Days of Bitcoin thread, we've had some back and forth.  I get things working, and then can't leave well enough alone.  It's up to 9 currencies, 1 BTC $ volume column, and 3 moving averages columns.  It's created by 3 table maker scripts, which call 12 separate column making scripts.  4 of the 12 are somewhat custom, but the other 8 are so nearly identical in function that they should just be inputs to one function.  Soon (rough sketch after this storylog).
1.Download  Now instead of downloading the whole 5 year data set each day, the script only downloads the latest data and appends it to the main list.

2.Import Figured out a new way to keep track of the newest date, when sorting by VWAP, and the list continues to grow in length each day. I put a counter in while processing all the files to look for the oldest date in the top 100.  

3.Export  Learned how to have one script call another script.

4.Posting  It's one click action, but I'm currently testing having Windows Task Scheduler run the script, through a batch file.

5.Archive  I'm trying not to hoard things on this project, but the ever-expanding price and volume history for BTC in each currency should be backed up regularly.
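Here's the rough sketch of the function version, using the GBP script further down as the template; the exchange names are my guesses from the table headers, and the BTC-side volume download would need the same treatment:
Code:
import requests

# one function instead of eight near-identical scripts
def append_latest(currency, exchange):
    base = "https://data.bitcoinity.org/export_data.csv"
    url = f"{base}?currency={currency}&data_type=volume&exchange={exchange}&r=day&t=b&timespan=3d&vu=curr"
    response = requests.get(url)
    if response.status_code != 200:
        print(f"Error: {response.status_code} - unable to download {currency}")
        return
    # grab the last non-empty line and append it to the running CSV
    lines = response.text.split('\n')
    last_line = lines[-2] if lines[-1] == '' else lines[-1]
    path = f"C:/PyProjects/Top20/Top20_Total_03_24/VWAP_{currency}/{currency}_volume.csv"
    with open(path, "a", newline="") as csvfile:
        csvfile.write(last_line + '\n')

for currency, exchange in [("USD", "bitstamp"), ("GBP", "kraken"), ("EUR", "kraken"),
                           ("CAD", "kraken"), ("JPY", "kraken")]:
    append_latest(currency, exchange)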

Here is the lead table maker script which calls the other 2 table making scripts, and handles almost all of the formatting, headers, and closers.
Code:
import pyautogui, pyperclip, shutil, time, webbrowser

# sets a sleep timer allowing for easier setting of faster runs during testing, and the data has been downloaded already
sleep_time = 15

# calls the scripts which create each column to be zipped together below
import Top20USD
time.sleep(sleep_time)
import Top20GBP
time.sleep(sleep_time)
import Top20EUR
time.sleep(sleep_time)
import Top20CAD
time.sleep(sleep_time)
import Top20JPY
time.sleep(sleep_time)

# location of the columns created above
usd = 'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_USD/top100usd.txt'
gbp = 'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_GBP/top100gbp.txt'
eur = 'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_EUR/top100eur.txt'
cad = 'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_CAD/top100cad.txt'
jpy = 'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_JPY/top100jpy.txt'

file_paths = [usd, gbp, eur, cad, jpy]
print("importingfiles")
with open('C:/PyProjects/Top20/Top20_Total_03_24/top20table.txt', 'w', encoding='utf-8') as output_file:

    # Read lines from all files simultaneously
    with open(file_paths[0], 'r', encoding='utf-8') as file1, \
            open(file_paths[1], 'r', encoding='utf-8') as file2, \
            open(file_paths[2], 'r', encoding='utf-8') as file3, \
            open(file_paths[3], 'r', encoding='utf-8') as file4, \
            open(file_paths[4], 'r', encoding='utf-8') as file5:
        
        for line1, line2, line3, line4, line5 in zip(file1, file2, file3, file4, file5):

            # Write combined lines to the output file
            output_line = f"| {line1.strip()}| {line2.strip()}| {line3.strip()}| {line4.strip()}| {line5.strip()}|\n"
            output_file.write(output_line)
# progress check
print("clipboarding")

# putting the trad5 part together, setting links as variables to limit code line length
with open('C:/PyProjects/Top20/Top20_Total_03_24/Top20table.txt', 'r') as top100:
    ul = 'https://bitcoincharts.com/charts/bitstampUSD'
    gl = 'https://data.bitcoinity.org/markets/volume/5y/GBP/kraken?r=day&t=b'
    el = 'https://data.bitcoinity.org/markets/volume/5y/EUR/kraken?r=day&t=b'
    cl = 'https://data.bitcoinity.org/markets/volume/5y/CAD/kraken?r=day&t=b'
    jl = 'https://data.bitcoinity.org/markets/volume/5y/JPY/kraken?r=day&t=b'
    list = top100.read()
    pre = f"[pre]   US Dollar              British Pound          EU Euro                Canadian Dollar        Japanese Yen"
    prelude = f"[size=9pt][b]| [url={ul}]Rank BitStamp   USD/BTC[/url]| [url={gl}]Rank Kraken     GBP/BTC[/url]| [url={el}]Rank Kraken     EUR/BTC[/url]| [url={cl}]Rank Kraken     CAD/BTC[/url]| [url={jl}]Rank Kraken        JPY/BTC[/url]|[/b]"
    explanation = "[url=https://bitcointalk.org/index.php?topic=138109.msg54917391#msg54917391]|                                                    *  Chart  Legends  *                                                       |[/url]"
    JimboToronto = "|                                                          GoBTCGo™                                                             |[/pre]"
    full_post1 = f"{pre}\n{prelude}\n{list}{explanation}\n{JimboToronto}"

# on to the next set of tables, this script calls 4 separate currency scripts
import tm_T2_03_21

# making the second table
with open('C:/PyProjects/Top20/Top20_Total_03_24/Top20tableT2.txt', 'r') as top100T2:
  
    il = 'https://data.bitcoinity.org/markets/volume/5y/IDR/bitcoin.co.id?r=day&t=b'
    kl = 'https://data.bitcoinity.org/markets/volume/5y/KRW/korbit?r=day&t=b'
    al = 'https://data.bitcoinity.org/markets/volume/5y/AUD/btcmarkets?r=day&t=b'
    ual = 'https://data.bitcoinity.org/markets/volume/5y/UAH/exmo?r=day&t=b'
    bv = 'https://data.bitcoinity.org/markets/volume/5y/USD/bitcoin.co.id?r=day&t=b' # this is the highest BTC*USD price*volume
    listT2 = top100T2.read()
    pre = f"[pre]  Korean Won               Australian Dollar    Ukrainian Hryvnia      Indonesian Rupiah          Bitstamp Exchange"
    prelude = f"[size=9pt][b]| [url=https://data.bitcoinity.org/markets/volume/5y/KRW/korbit?r=day&t=b]#  Korbit         KRW/BTC[/url]| [url=https://data.bitcoinity.org/markets/volume/5y/AUD/btcmarkets?r=day&t=b]#  btcmarkets AUD/BTC[/url]| [url=https://data.bitcoinity.org/markets/volume/5y/UAH/exmo?r=day&t=b]#  exmo         UAH/BTC[/url]| [url=https://data.bitcoinity.org/markets/volume/5y/IDR/bitcoin.co.id?r=day&t=b]#  bitcoin.co.id    IDR/BTC[/url]| [url=https://bitcoincharts.com/charts/bitstampUSD]#  BTC vol  in MUSD[/url]|[/b]"
    explanation = "[url=https://bitcointalk.org/index.php?topic=138109.msg54917391#msg54917391]|                                                    *  Chart  Legends  *                                                    |[/url]"
    JimboToronto = "|                                                          GoBTCGo™                                                          |[/pre]"
    full_post2 = f"{pre}\n{prelude}\n{listT2}{explanation}\n{JimboToronto}"

# the final set of newest moving day average columns
import tm_T3_03_30

with open('C:/PyProjects/Top20/Top20_Total_03_24/Top20tableT3.txt', 'r') as top100T3:
    ul = 'https://bitcoincharts.com/charts/bitstampUSD'
    listT3 = top100T3.read()
    pre = f"[pre]Moving Average VWAP, the date listed is the final day of the given time span\n|  7 Day MA             30 Day MA           200 Day MA"
    prelude = f"[size=9pt][b]|  [url={ul}]#     7 DMA  USD/BTC[/url]|  [url={ul}]#     30 DMA USD/BTC[/url]| [url={ul}]#     200 DMA USD/BTC[/url]|[/b]"
    explanation = "[url=https://bitcointalk.org/index.php?topic=138109.msg54917391#msg54917391]|                         *  Chart  Legends  *                       |[/url]"
    JimboToronto = "|                               GoBTCGo™                       |[/pre]"
    full_post3 = f"{pre}\n{prelude}\n{listT3}{explanation}\n{JimboToronto}"
    full_post = f"Daily [BTC] Volume Weighted Average Prices\n{full_post1}\n{full_post2}\n{full_post3}"
    
    pyperclip.copy(full_post)

    # can use this link for the reply page to top20 thread
    url = 'https://bitcointalk.org/index.php?action=post;topic=138109.0'
    webbrowser.open(url)
    time.sleep(5)
    pyautogui.hotkey('f11')
    time.sleep(5)
    pyautogui.hotkey('tab')
    time.sleep(2)
    pyautogui.hotkey('tab')
    time.sleep(2)
    pyautogui.hotkey('ctrl', 'v')
    time.sleep(2)
    pyautogui.hotkey('tab')
    time.sleep(2)
    # we're doing it live if the next command is #ed out
    pyautogui.hotkey('tab')
    time.sleep(5)
    pyautogui.hotkey('enter')
    time.sleep(5)
    pyautogui.hotkey('f11')
    
# if I don't delete the cache each time, the minor changes I make don't show up the next run, I think
shutil.rmtree('C:/PyProjects/Top20/Top20_Total_03_24/__pycache__')
Here is a sample of the code that should be a function.  This one gathers the newest data to append for the British Pound, colors the newest item red and the oldest green, bolds items from the last 30 days, and makes sure VWAPs with different numbers of digits still line up.  Check out the outrageous use of commenting to choose your own adventure.   Cheesy
Code:
import csv, os, requests, time
from datetime import datetime, timedelta
import pandas as pd

# the vwap_digits variable is used later to have, say, 100_000 and 99_999 be lined up in a column.
currency = "GBP"
exchange = "kraken"
next_vwap_digits = "6"  # not used yet, set manually for each currency.

# current_unix_time = int(time.time())
unix_time_month = 60 * 60 * 24 * 31
unix_time_day = 60 * 60 * 24
today = datetime.today()

###########################################################
# RUN this if the previous days are lost, otherwise just download the last 2 days, jump ahead
###########################################################

# url = f"https://data.bitcoinity.org/export_data.csv?currency={currency}&data_type=volume&exchange={exchange}&r=day&t=b&timespan=5y&vu=curr"
# response = requests.get(url)
# print(response)
# # Check if the request was successful (status code 200)
# if response.status_code == 200:
#     # Assuming the response contains CSV data, you can save it to a file
#     with open(f"C:/PyProjects/Top20/Top20_Total_03_24/VWAP_{currency}/{currency}_volume.csv", "w", newline="") as csvfile:
#         csvfile.write(response.text)
#     print("CSV file downloaded successfully!")
# else:
#     print(f"Error: {response.status_code} - Unable to download CSV data.")
# time.sleep(15)

# url2 = f"https://data.bitcoinity.org/export_data.csv?currency={currency}&data_type=volume&exchange={exchange}&r=day&t=b&timespan=5y"
# response = requests.get(url2)
# print(response)
# # Check if the request was successful (status code 200)
# if response.status_code == 200:
#     # Assuming the response contains CSV data, you can save it to a file
#     with open(f"C:/PyProjects/Top20/Top20_Total_03_24/VWAP_{currency}/BTC_volume.csv", "w", newline="") as csvfile:
#         csvfile.write(response.text)
#     print(f"CSV file downloaded successfully!")
# else:
#     print(f"Error: {response.status_code} - Unable to download CSV data.")


# ###########################################################
# # run this to only append the latest data              
# ###########################################################


filename = f'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_{currency}/{currency}_volume.csv'  
url = f"https://data.bitcoinity.org/export_data.csv?currency={currency}&data_type=volume&exchange={exchange}&r=day&t=b&timespan=3d&vu=curr"
response = requests.get(url)
print(response)
# Check if the request was successful (status code 200)
if response.status_code == 200:
    # Split the response text into lines
    lines = response.text.split('\n')
    print(lines)

    # Get the last line (excluding the potential trailing empty line)
    last_line = lines[-2] if lines[-1] == '' else lines[-1]
    print(last_line)              
    # Assuming the response contains CSV data, you can save it to a file
    with open(filename, "a", newline="") as csvfile:
        csvfile.write(last_line + '\n')
    print("CSV file downloaded successfully!")
else:
    print(f"Error: {response.status_code} - Unable to download CSV data.")
time.sleep(15)


filename = f'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_{currency}/BTC_volume.csv'
url2 = f"https://data.bitcoinity.org/export_data.csv?currency={currency}&data_type=volume&exchange={exchange}&r=day&t=b×pan=3d"
response = requests.get(url2)
print(response)
# Check if the request was successful (status code 200)
if response.status_code == 200:
    # Split the response text into lines
    lines = response.text.split('\n')

    # Get the last line (excluding the potential trailing empty line)
    last_line = lines[-2] if lines[-1] == '' else lines[-1]
    print(last_line)  
    # Assuming the response contains CSV data, you can save it to a file
    with open(filename, "a", newline="") as csvfile:
        csvfile.write(last_line + '\n')
    print(f"CSV file downloaded successfully!")
else:
    print(f"Error: {response.status_code} - Unable to download CSV data.")


###########################################################
# end of commenting choice                                
###########################################################


# Read the CSV files
datestamp = pd.read_csv(f'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_{currency}/{currency}_volume.csv', usecols=[0])  # Assuming timestamp is in the first column
curr_df = pd.read_csv(f'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_{currency}/{currency}_volume.csv', usecols=[1])
btc_df = pd.read_csv(f'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_{currency}/BTC_volume.csv', usecols=[1])

# Perform division
result_df = curr_df / btc_df

# Save the  combined DataFrame to a new CSV file
result_df.to_csv(f'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_{currency}/result_with_timestamp.csv', index=False)

# Extract the date part from the timestamp
result_df['Date'] = pd.to_datetime(datestamp.iloc[:, 0]).dt.date

# Save the combined DataFrame to a new CSV file
result_df.to_csv(f'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_{currency}/result_with_timestamp.csv', index=False)

print("Results with timestamp written to 'result_with_timestamp.csv'.")

filename = f'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_{currency}/result_with_timestamp.csv'
top = 100
red_rank = 0
# Read data from the CSV file for green finding later
rows = []
with open(filename, 'r') as file:
    reader = csv.reader(file)
    next(reader)  # Skip the header row

    # Fill empty cells in column [0] before sorting, luckily this only happened when prices were low.  Here is where the red_rank variable is counted.
    # the last item in the list will be the red item if it makes the top 100 daily vwap.
    previous_value = 10000
    for row in reader:
        vwapcurr = row[0] or previous_value  # Fill empty cell with previous value
        time = row[1]
        rows.append((time, vwapcurr))
        previous_value = vwapcurr  # Update previous value for the next iteration
        red_rank += 1

    # Sort the data by VWAPcurr
    tuples = [(timestamp, vwap, i + 1) for i, (timestamp, vwap) in enumerate(rows)]
    sorted_tuples = sorted(tuples, key=lambda x: float(x[1]), reverse=True)
    rank = 1
    red_rank -= 1
    print(f"red rank is {red_rank}")
    with open(f'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_{currency}/forgreen{currency}.txt', 'w') as top100:
        
        for time, vwapcurr, rows in sorted_tuples[:top]:
            formatted_date = datetime.strptime(time, '%Y-%m-%d').strftime('%Y-%m-%d')
            print(time)
            date_difference = today - datetime.strptime(formatted_date, '%Y-%m-%d')
            vwapcurr_float = float(vwapcurr)
            formatted_output = f"{rank:2d}, {time}, {vwapcurr_float:.0f}"
            top100.write(formatted_output + '\n')
            rank += 1

    os.replace (f"C:/PyProjects/Top20/Top20_Total_03_24/VWAP_{currency}/forgreen{currency}.txt", f"C:/PyProjects/Top20/Top20_Total_03_24/VWAP_{currency}/forgreen{currency}.csv")  
    filename = f'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_{currency}/forgreen{currency}.csv'

    # Read data from the file to find the green rank
    rows = []
    with open(filename, 'r') as file:
        reader = csv.reader(file)
        for row in reader:
            rows.append(row)

    # Sort the data by date, top entry is oldest rank in top 100, which is the green row
    tuples = [(rank, timestamp, vwap) for rank, timestamp, vwap in rows]  
    sorted_tuples = sorted(tuples, key=lambda x: x[1])
    green_rank, _, _ = sorted_tuples[0]
    print(green_rank)


    # sort for rank, bolding, coloring
    filename = f'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_{currency}/result_with_timestamp.csv'
    top = 100

    # Read data from the CSV file
    rows = []
    with open(filename, 'r') as file:
        reader = csv.reader(file)
        next(reader)  # Skip the header row

        # Fill empty cells in column [0] before sorting
        previous_value = 1000
        for row in reader:
            vwapcurr = row[0] or previous_value  # Fill empty cell with previous value
            time = row[1]
            rows.append((time, vwapcurr))
            previous_value = vwapcurr  # Update previous value for the next iteration

    # Sort the data by VWAPcurr
    tuples = [(timestamp, vwap, i + 1) for i, (timestamp, vwap) in enumerate(rows)]
    sorted_tuples = sorted(tuples, key=lambda x: float(x[1]), reverse=True)
    rank = 1

    with open(f'C:/PyProjects/Top20/Top20_Total_03_24/VWAP_{currency}/top100{currency}.txt', 'w') as top100:
        
        for time, vwapcurr, rows in sorted_tuples[:top]:
            formatted_date = datetime.strptime(time, '%Y-%m-%d').strftime('%Y-%m-%d')
            date_difference = today - datetime.strptime(formatted_date, '%Y-%m-%d')
            vwapcurr_float = float(vwapcurr)
            if rank <= 99:
                spacing = " "
            if rank == 100:
                spacing = ""
            if date_difference <= timedelta(days = 30):
                bolding = "[b]"
                unbolding = "[/b]"
            if date_difference >= timedelta(days = 31):
                bolding = ""
                unbolding = ""
            if rows == red_rank:
                redcoloring = "[color=red][u]"
                reduncoloring = "[/u][/color]"
            if rows != red_rank:
                redcoloring = ""
                reduncoloring = ""    
            if  rank < int(green_rank):
                greencoloring = ""
                greenuncoloring = ""
            if  rank > int(green_rank):
                greencoloring = ""
                greenuncoloring = ""
            if vwapcurr_float <= 99_999:
                    endspace = " "
            if vwapcurr_float >= 100_000:
                    endspace = ""
            if rank - int(green_rank) == 0:
                greencoloring = "[color=#0BC505][u]"
                greenuncoloring = "[/u][/color]"
            formatted_output = f"{redcoloring}{greencoloring}{bolding}{rank:2d}{spacing} {formatted_date}  {endspace}{vwapcurr_float:,.0f}{unbolding}{reduncoloring}{greenuncoloring}"
            top100.write(formatted_output + '\n')
            rank += 1
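As an aside, the spacing and endspace flags could probably be replaced by a single fixed-width format spec; a minimal sketch, not what the script above does:
Code:
# right-align to a fixed width so 99,999 and 100,000 line up without spacer flags
for value in (99_999.0, 100_000.0):
    print(f"{value:>8,.0f}")   # '  99,999' then ' 100,000'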
EDIT: Phrasing
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Welp, Buddy's on a bit of a break it seems.   Smiley  I ran the script and it went smoothly again, but didn't post any animated recap, because it grabbed images from yesterday like we previously talked about.

The Top 100 list is going nicely.  So basically there is a legacy source of JSON data with Bitstamp USD BTC data, while the other currency data comes from bitcoinity as a CSV.  It took a few tries, but I am now converting the JSON response into a CSV file, which I had better luck working with.  The method of finding the green entry, the oldest in the top 100, by sorting the list twice now works with all currencies.  When I took the script for a spin today, it identified an older entry on the list than the one I had been manually tracking, so I went back and made some quiet corrections.  Shhhh.  Everything is good now.  Why did I say that, now everything will be broken.   Grin
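For the record, the double sort boils down to: rank by VWAP, slice off the top 100, re-sort that slice by date, and the first row is the green one.  A toy sketch with made-up rows:
Code:
rows = [('2021-04-16', 62000.0), ('2024-03-08', 66000.0), ('2021-02-21', 57000.0)]  # toy (date, vwap) data
by_vwap = sorted(rows, key=lambda r: r[1], reverse=True)
top100 = by_vwap[:100]
green_date, green_vwap = min(top100)  # oldest date in the top 100; ISO dates sort correctly as text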

Here is the unoptimized code for dealing with the USD volume weighted average price.  The biggest change, besides converting the JSON to CSV, is that instead of requesting all the days, I'm only requesting the newest data to append to the list.  The big block of commented code is only run the first time.  Which, I see now, could just be a check on whether the file exists: call a full download script if it doesn't, and in the else branch append just today's data if it does (see the sketch after the code block).
Code:
import csv, os, requests, time
from datetime import datetime, timedelta

currency = "USD"
vwap_digits = "5"

current_unix_time = int(time.time())
unix_time_month = 60 * 60 * 24 * 31
unix_time_day = 60 * 60 * 24
today = datetime.today()

# # RUN this if the previous 1200 plus data days are lost, otherwise just download the last 2 days, jump ahead
# filename = f'C:/PyProjects/Top20/VWAP_{currency}/result_with_timestamp.csv'
# url = f"http://bitcoincharts.com/charts/chart.json?m=bitstampUSD&r=1200&i=Daily"
# response = requests.get(url, verify=False)
# data = response.json()

# column1_data = [entry[7] for entry in data]
# column2_data = [datetime.utcfromtimestamp(entry[0]).strftime('%Y-%m-%d') for entry in data]


# # Create the CSV data rows
# rows = zip(column1_data, column2_data)
# filename = "C:/PyProjects/Top20/VWAP_USD/result_with_timestamp.csv"

# # Create or overwrite the CSV file in write mode for potential future data additions
# with open(filename, 'w', newline='') as csvfile:
#     csv_writer = csv.writer(csvfile)

#     # Write the CSV header (optional, based on your requirement)
#     csv_writer.writerow(['Column 1 Name', 'Column 2 Name'])

#     # Write the data rows
#     csv_writer.writerows(rows)
#     print(rows)

# print("Data successfully exported to top100usd.csv")


#  # comment this out if downloading the initial data set
filename = f'C:/PyProjects/Top20/VWAP_{currency}/result_with_timestamp.csv'

# only requesting 2 days of data
url = f"http://bitcoincharts.com/charts/chart.json?m=bitstampUSD&r=2&i=Daily"
response = requests.get(url, verify=False)
data = response.json()

#  Extract the desired columns (modify these indices based on your JSON structure)
column1_data = [entry[7] for entry in data]
# keep the same '%Y-%m-%d' format as the initial download, so the strptime call near the end doesn't choke on appended rows
column2_data = [datetime.utcfromtimestamp(entry[0]).strftime('%Y-%m-%d') for entry in data]

# Create the CSV data row
row = zip(column1_data, column2_data)

filename = f"C:/PyProjects/Top20/VWAP_{currency}/result_with_timestamp.csv"

# Open the CSV file in append mode for potential future data additions
with open(filename, 'a', newline='') as csvfile:
    csv_writer = csv.writer(csvfile)
    csv_writer.writerows(row)
    print(row)


filename = f'C:/PyProjects/Top20/VWAP_{currency}/result_with_timestamp.csv'
top = 100

# Read data from the CSV file for green finding later
rows = []
with open(filename, 'r') as file:
    reader = csv.reader(file)
    next(reader)  # Skip the header row

    # Fill empty cells in column [0] before sorting
    previous_value = 10000
    for row in reader:
        vwapcurr = row[0] or previous_value  # Fill empty cell with previous value
        time = row[1]
        rows.append((time, vwapcurr))
        previous_value = vwapcurr  # Update previous value for the next iteration

    # Sort the data by VWAPcurr
    tuples = [(timestamp, vwap, i + 1) for i, (timestamp, vwap) in enumerate(rows)]
    sorted_tuples = sorted(tuples, key=lambda x: float(x[1]), reverse=True)
    rank = 1

    with open(f'C:/PyProjects/Top20/VWAP_{currency}/forgreen{currency}.txt', 'w') as top100:
        
        for time, vwapcurr, rows in sorted_tuples[:top]:
            # formatted_date = datetime.strptime(time, '%Y-%m-%d').strftime('%Y-%m-%d')
            print(time)
            # date_difference = today - time #datetime.strptime(formatted_date, '%Y-%m-%d')
            vwapcurr_float = float(vwapcurr)
            formatted_output = f"{rank:2d}, {time}, {vwapcurr_float:.0f}"
            top100.write(formatted_output + '\n')
            rank += 1

    os.replace (f"C:/PyProjects/Top20/VWAP_{currency}/forgreen{currency}.txt", f"C:/PyProjects/Top20/VWAP_{currency}/forgreen{currency}.csv")  
    filename = f'C:/PyProjects/Top20/VWAP_{currency}/forgreen{currency}.csv'

    # Read data from the file to find the green rank
    rows = []
    with open(filename, 'r') as file:
        reader = csv.reader(file)
        for row in reader:
            rows.append(row)

    # Sort the data by date
    tuples = [(rank, timestamp, vwap) for rank, timestamp, vwap in rows]  
    sorted_tuples = sorted(tuples, key=lambda x: x[1])
    green_rank, _, _ = sorted_tuples[0]
    print(green_rank)


    # sort for rank, bolding, coloring
    filename = f'C:/PyProjects/Top20/VWAP_{currency}/result_with_timestamp.csv'
    top = 100

    # Read data from the CSV file
    rows = []
    with open(filename, 'r') as file:
        reader = csv.reader(file)
        next(reader)  # Skip the header row

        # Fill empty cells in column [0] before sorting
        previous_value = 1000
        for row in reader:
            vwapcurr = row[0] or previous_value  # Fill empty cell with previous value
            time = row[1]
            rows.append((time, vwapcurr))
            previous_value = vwapcurr  # Update previous value for the next iteration

    # Sort the data by VWAPcurr
    tuples = [(timestamp, vwap, i + 1) for i, (timestamp, vwap) in enumerate(rows)]
    sorted_tuples = sorted(tuples, key=lambda x: float(x[1]), reverse=True)
    rank = 1

    with open(f'C:/PyProjects/Top20/VWAP_{currency}/top100{currency}.txt', 'w') as top100:
        
        for time, vwapcurr, rows in sorted_tuples[:top]:
            formatted_date = datetime.strptime(time, '%Y-%m-%d').strftime('%Y-%m-%d')
            date_difference = today - datetime.strptime(formatted_date, '%Y-%m-%d')
            vwapcurr_float = float(vwapcurr)

            # lines up the columns
            if rank <= 99:
                spacing = " "
            if rank == 100:
                spacing = ""

             # bolds item within last 30 days
            if date_difference <= timedelta(days = 30):
                bolding = "[b]"
                unbolding = "[/b]"
            if date_difference >= timedelta(days = 31):
                bolding = ""
                unbolding = ""

            # the newest data to be ranked is always the last row
            if rows == 1200:
                redcoloring = "[color=red][u]"
                reduncoloring = "[/u][/color]"
            if rows != 1200:
                redcoloring = ""
                reduncoloring = ""    

            # green_rank is found by sorting by vwap, and then sorting those by date to get the oldest
            # (and yes, a single != test replaces the separate < and > branches)
            if rank != int(green_rank):
                greencoloring = ""
                greenuncoloring = ""
            if rank == int(green_rank):
                greencoloring = "[color=green][u]"
                greenuncoloring = "[/u][/color]"

            # sets the end of the column right if there is a decimal gain within the currency column;
            # 10**digits is the first value with one more digit (10*digits was a typo)
            if vwapcurr_float <= 10**int(vwap_digits) - 1:
                    endspace = " "
            if vwapcurr_float >= 10**int(vwap_digits):
                    endspace = ""
            
            formatted_output = f"{redcoloring}{greencoloring}{bolding}{rank:2d}{spacing} {formatted_date}  {vwapcurr_float:,.0f}{unbolding}{reduncoloring}{greenuncoloring}{endspace}"
            top100.write(formatted_output + '\n')
            rank += 1
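And the file-exists check mentioned above could look something like this; a sketch only, where full_download and append_latest are hypothetical stand-ins for the two blocks of code above:
Code:
import os

def full_download(path):
    # hypothetical stand-in for the big commented block (fetch all days, write the file)
    print(f"would download every day into {path}")

def append_latest(path):
    # hypothetical stand-in for the 2-day request (append only the newest row)
    print(f"would append the newest row to {path}")

filename = "C:/PyProjects/Top20/VWAP_USD/result_with_timestamp.csv"

if os.path.exists(filename):
    append_latest(filename)
else:
    full_download(filename)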
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Well I'm just crushed my coding buddy Mango is not right next to me now, biting my hand, jealous of the mouse.  He passed away two days ago. Cry

But I know he would want me to go on, so we better do that.   Undecided

I almost wrote a function, I think.  Or at least I finally see when a function is useful: when one wants to do the same thing to different things.  That's interesting, because that sounds like exactly why I started this journey to begin with.  I can't believe how 'anti-function' I found myself to be.  Gemini and copilot both kept annoyingly providing functions, from which I would dutifully strip out the part I wanted and hard-code the variables, because everything I was doing was single-purpose, pass/fail type stuff.
This is for the Top20 thread script, which downloads the latest daily currency volume and divides it by the BTC volume in that currency to get the daily volume weighted average price.  So here is my template that works with bitcoinity.com.  This took a while, but if one sets the variables at the beginning of the script, the rest runs itself, except for the part about changing the spacing when the number of digits changes.  That seems like a job for a function, right?
What I was doing is copying and pasting the template and then renaming the variables, but that is what a function is for, if I understand things correctly.  Also, the pre-template currency scripts currently need to be updated one at a time, OR one could just update the function and its inputs (see the sketch after the code block).  I should probably also be using tabulate in some way, but it's been fun trying to get everything lined up with spacing flags.  I am definitely happy to learn that in Python one can write 1_000_000 to mean 1 million, for readability; that was from some '5 Python coding tricks' video I can't find anymore.  So I should be able to make the template a function that takes the currency and exchange as inputs, and not have to change each currency's script if something needs updating.
Provided they make the top 100, this baby bolds the entries from the last month, colors red and underlines the newest entry, and colors green and underlines the oldest entry in the top 100.  I tried so many ways, and failed to do it in one sort, which I know is possible.  Theoretically, one could input the current oldest entry once, and then each day's run could look for the next date to turn green when the current one drops off the list; that information could be made permanent by saving it to a file.  I did not figure that out yet.
Code:
import csv, os, requests, time
from datetime import datetime, timedelta
import pandas as pd

CURR = "JPY"
EXCH = "kraken"

# for later calculations
unix_time_month = 60 * 60 * 24 * 31
unix_time_day = 60 * 60 * 24
today = datetime.today()

# downloads the currency data
url = f"https://data.bitcoinity.org/export_data.csv?currency={CURR}&data_type=volume&exchange={EXCH}&r=day&t=b×pan=5y&vu=curr"
response = requests.get(url)
print(response)
# Check if the request was successful (status code 200)
if response.status_code == 200:
    # Assuming the response contains CSV data, you can save it to a file
    with open(f"C:/PyProjects/Top20/VWAP_{CURR}/{CURR}_volume.csv", "w", newline="") as csvfile:
        csvfile.write(response.text)
    print("CSV file downloaded successfully!")
else:
    print(f"Error: {response.status_code} - Unable to download CSV data.")
time.sleep(5)

# downloads the btc data
url2 = f"https://data.bitcoinity.org/export_data.csv?currency={CURR}&data_type=volume&exchange={EXCH}&r=day&t=b×pan=5y"
response = requests.get(url2)
print(response)
# Check if the request was successful (status code 200)
if response.status_code == 200:
    # Assuming the response contains CSV data, you can save it to a file
    with open(f"C:/PyProjects/Top20/VWAP_{CURR}/BTC_volume.csv", "w", newline="") as csvfile:
        csvfile.write(response.text)
    print(f"CSV file downloaded successfully!")
else:
    print(f"Error: {response.status_code} - Unable to download CSV data.")

# Read the CSV files
datestamp = pd.read_csv(f'C:/PyProjects/Top20/VWAP_{CURR}/{CURR}_volume.csv', usecols=[0])  # Assuming timestamp is in the first column
curr_df = pd.read_csv(f'C:/PyProjects/Top20/VWAP_{CURR}/{CURR}_volume.csv', usecols=[1])
btc_df = pd.read_csv(f'C:/PyProjects/Top20/VWAP_{CURR}/BTC_volume.csv', usecols=[1])

# Perform division to get the daily volume weighted average price
result_df = curr_df / btc_df

# Save the  combined DataFrame to a new CSV file
result_df.to_csv(f'C:/PyProjects/Top20/VWAP_{CURR}/result_with_timestamp.csv', index=False)

# Extract the date part from the timestamp
result_df['Date'] = pd.to_datetime(datestamp.iloc[:, 0]).dt.date

# Save the combined DataFrame to a new CSV file
result_df.to_csv(f'C:/PyProjects/Top20/VWAP_{CURR}/result_with_timestamp.csv', index=False)

print("Results with timestamp written to 'result_with_timestamp.csv'.")

filename = f'C:/PyProjects/Top20/VWAP_{CURR}/result_with_timestamp.csv'
top = 100

# Read data from the CSV file for green finding later
rows = []
with open(filename, 'r') as file:
    reader = csv.reader(file)
    next(reader)  # Skip the header row

    # Fill empty cells in column [0] before sorting, make sure this doesn't affect the top 100
    previous_value = None
    for row in reader:
        vwapcurr = row[0] or previous_value  # Fill empty cell with previous value
        time = row[1]
        rows.append((time, vwapcurr))
        previous_value = vwapcurr  # Update previous value for the next iteration

# First Sort the data for green entry knowing it is the oldest top 100
tuples = [(timestamp, vwap, i + 1) for i, (timestamp, vwap) in enumerate(rows)]
sorted_tuples = sorted(tuples, key=lambda x: float(x[1]), reverse=True)
rank = 1

with open(f'C:/PyProjects/Top20/VWAP_{CURR}/forgreen{CURR}.txt', 'w') as top100:
    
    for time, vwapcurr, rows in sorted_tuples[:top]:
        formatted_date = datetime.strptime(time, '%Y-%m-%d').strftime('%Y-%m-%d')
        date_difference = today - datetime.strptime(formatted_date, '%Y-%m-%d')
        vwapcurr_float = float(vwapcurr)
        formatted_output = f"{rank:2d}, {formatted_date}, {vwapcurr_float:.0f}"
        top100.write(formatted_output + '\n')
        rank += 1

os.rename (f"C:/PyProjects/Top20/VWAP_{CURR}/forgreen{CURR}.txt", f"C:/PyProjects/Top20/VWAP_{CURR}/forgreen{CURR}.csv")  
filename = f'C:/PyProjects/Top20/VWAP_{CURR}/forgreen{CURR}.csv'

# Read data from the file to find the green rank
rows = []
with open(filename, 'r') as file:
    reader = csv.reader(file)
    for row in reader:
        rows.append(row)

# Sort the data by date, finally get the green date
tuples = [(rank, timestamp, vwap) for rank, timestamp, vwap in rows]  
sorted_tuples = sorted(tuples, key=lambda x: x[1])
green_rank, _, _ = sorted_tuples[0]
print(green_rank)


# sort for rank, bolding, coloring
filename = f'C:/PyProjects/Top20/VWAP_{CURR}/result_with_timestamp.csv'
top = 100

# Read data from the CSV file
rows = []
with open(filename, 'r') as file:
    reader = csv.reader(file)
    next(reader)  # Skip the header row

    # Fill empty cells in column [0] before sorting
    previous_value = None
    for row in reader:
        vwapcurr = row[0] or previous_value  # Fill empty cell with previous value
        time = row[1]
        rows.append((time, vwapcurr))
        previous_value = vwapcurr  # Update previous value for the next iteration

# Sort the data by VWAP, and go through one by one setting flags for the BBC code
tuples = [(timestamp, vwap, i + 1) for i, (timestamp, vwap) in enumerate(rows)]
sorted_tuples = sorted(tuples, key=lambda x: float(x[1]), reverse=True)
rank = 1

with open(f'C:/PyProjects/Top20/VWAP_{CURR}/top100{CURR}.txt', 'w') as top100:
    
    for time, vwapcurr, rows in sorted_tuples[:top]:
        formatted_date = datetime.strptime(time, '%Y-%m-%d').strftime('%Y-%m-%d')
        date_difference = today - datetime.strptime(formatted_date, '%Y-%m-%d')
        vwapcurr_float = float(vwapcurr)
        if rank <= 99:
            spacing = " "
        if rank == 100:
            spacing = ""
        if date_difference <= timedelta(days = 30):
            bolding = "[b]"
            unbolding = "[/b]"
        if date_difference >= timedelta(days = 31):
            bolding = ""
            unbolding = ""
        if rows == 1827:
            redcoloring = "[color=red][u]"
            reduncoloring = "[/u][/color]"
        if rows != 1827:
            redcoloring = ""
            reduncoloring = ""    
        if  rank < int(green_rank):
            greencoloring = ""
            greenuncoloring = ""
        if  rank > int(green_rank):
            greencoloring = ""
            greenuncoloring = ""
        if vwapcurr_float <= 9_999_999:
                endspace = " |"
        if vwapcurr_float >= 10_000_000:
                endspace = "|"
        if rank - int(green_rank) == 0:
            greencoloring = "[color=green][u]"
            greenuncoloring = "[/u][/color]"
        formatted_output = f"{redcoloring}{greencoloring}{bolding}{rank:2d}{spacing} {formatted_date}  {vwapcurr_float:.0f}{unbolding}{reduncoloring}{greenuncoloring}{endspace}"
        top100.write(formatted_output + '\n')
        rank += 1
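Since I keep talking about it, here is roughly what wrapping the download step in a function might look like; a sketch under my own naming (download_volume_csv is not from the script above), reusing the same bitcoinity URL layout:
Code:
import requests

def download_volume_csv(currency, exchange, out_path, curr_units):
    # one bitcoinity download; returns True on success
    url = (f"https://data.bitcoinity.org/export_data.csv?currency={currency}"
           f"&data_type=volume&exchange={exchange}&r=day&t=b&timespan=5y")
    if curr_units:
        url += "&vu=curr"  # volume in the fiat currency rather than in BTC
    response = requests.get(url)
    if response.status_code != 200:
        print(f"Error: {response.status_code} - Unable to download CSV data.")
        return False
    with open(out_path, "w", newline="") as csvfile:
        csvfile.write(response.text)
    print("CSV file downloaded successfully!")
    return True

# one call per file instead of a copy-pasted block per currency
download_volume_csv("JPY", "kraken", "C:/PyProjects/Top20/VWAP_JPY/JPY_volume.csv", curr_units=True)
download_volume_csv("JPY", "kraken", "C:/PyProjects/Top20/VWAP_JPY/BTC_volume.csv", curr_units=False)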

Five days in a row for our ChartBuddy script.  I'm about ready to make these posts happen automatically, with no clicks.  Onward.
Edit: Hopefully cleared up a few sentences, and fixed some # commenting in the code
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
All right, one in a row, Daily Recap smooth as silk.  Probably because I haven't started the second great date and file renaming project.  Smiley

On the Top 20 Days in Bitcoin thread, things are progressing nicely.  I learned how to load your own scripts within a script, and then copilot and I came up with this, because I needed each currency's script to run that day's calculations before combining them into the top20table.  It's fun trying to get the question just right.  And again, I'm not writing very much of this from scratch, but I'm pleased that, more and more, I can see when what copilot is suggesting is not going to work, or, while trying to figure out a better prompt, I spot what is going wrong on my own.  Good times.  Smiley
top20table_maker.py
Code:
import pyperclip, time

sleep_time = 2

import Top20USD
time.sleep(sleep_time)
import Top20GBP
time.sleep(sleep_time)
import Top20EUR
time.sleep(sleep_time)
import Top20CAD
time.sleep(sleep_time)
import Top20JPY
time.sleep(sleep_time)

usd = 'C:/PyProjects/Top20/VWAP_USD/top100usd.txt'
gbp = 'C:/PyProjects/Top20/VWAP_GBP/top100gbp.txt'
eur = 'C:/PyProjects/Top20/VWAP_EUR/top100eur.txt'
cad = 'C:/PyProjects/Top20/VWAP_CAD/top100cad.txt'
jpy = 'C:/PyProjects/Top20/VWAP_JPY/top100jpy.txt'

file_paths = [usd, gbp, eur, cad, jpy]

with open('C:/PyProjects/Top20/top20table.txt', 'w', encoding='utf-8') as output_file:
    # Read lines from all files simultaneously
    with open(file_paths[0], 'r', encoding='utf-8') as file1, \
            open(file_paths[1], 'r', encoding='utf-8') as file2, \
            open(file_paths[2], 'r', encoding='utf-8') as file3, \
            open(file_paths[3], 'r', encoding='utf-8') as file4, \
            open(file_paths[4], 'r', encoding='utf-8') as file5:
        for line1, line2, line3, line4, line5 in zip(file1, file2, file3, file4, file5):
            # Write combined lines to the output file
            output_line = f"{line1.strip()} | {line2.strip()} {line3.strip()} {line4.strip()} {line5.strip()}\n"
            output_file.write(output_line)

# putting the post on the clipboard
with open('C:/PyProjects/Top20/top20table.txt', 'r') as top100:
    list = top100.read()
    prelude = f"[pre][size=10pt][url=https://bitcoincharts.com/charts/bitstampUSD]|Rank BitStamp   USD/BTC[/url] |[url=https://data.bitcoinity.org/markets/volume/5y/GBP/kraken?r=day&t=b]Rank Kraken     GBP/BTC[/url] |[url=https://data.bitcoinity.org/markets/volume/5y/EUR/kraken?r=day&t=b]Rank Kraken     EUR/BTC[/url] |[url=https://data.bitcoinity.org/markets/volume/5y/CAD/kraken?r=day&t=b] Rank  Kraken    CAD/BTC[/url]|[url=https://data.bitcoinity.org/markets/volume/5y/JPY/kraken?r=day&t=b] Rank  Kraken    JPY/BTC  |[/url]"
    
explanation = "[url=https://bitcointalk.org/index.php?topic=138109.msg54917391#msg54917391][size=8pt] * * Chart Explanation * * [/size][/url][/pre]"
JimboToronto = "                                                                                            GoBTCGo™"
full_post = f"{prelude}\n{list}{explanation}\n{JimboToronto}"
pyperclip.copy(full_post)
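If the list of currencies keeps growing, the five hard-coded file handles could be generalized; a sketch with contextlib.ExitStack that produces the same output format:
Code:
from contextlib import ExitStack

file_paths = [usd, gbp, eur, cad, jpy]  # the same list as above

with open('C:/PyProjects/Top20/top20table.txt', 'w', encoding='utf-8') as output_file:
    with ExitStack() as stack:
        files = [stack.enter_context(open(p, 'r', encoding='utf-8')) for p in file_paths]
        for lines in zip(*files):
            first, *rest = (line.strip() for line in lines)
            output_file.write(f"{first} | {' '.join(rest)}\n")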
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
5.Archive All files and folders are now in the format dd-mm-yyyy, for easier auto sorting.
Suggestion: yyyy-mm-dd is much easier to sort.
Example:
Code:
2024_02_23_Fri_10.34h
2024_02_27_Tue_10.34h
2024_03_01_Fri_10.34h
2024_03_05_Tue_07.33h
2024_03_05_Tue_10.34h
2024_03_08_Fri_10.34h
2024_03_12_Tue_10.34h
Even ls shows everything in chronological order now.
You are right again.  Heyyy.  ls, I remember that stands for list stuff, correct?  Wink  EDIT: but seriously, I love Linux, but I have never known what ls is short for.  If it is list, why not just l?  Throw it on the pile of things I know I don't know, for now.
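For the record, those sortable names can come straight out of strftime; a quick check:
Code:
from datetime import datetime

# zero-padded, biggest unit first, so alphabetical order equals chronological order
stamp = datetime(2024, 3, 5, 7, 33)
print(stamp.strftime('%Y_%m_%d_%a_%H.%Mh'))  # 2024_03_05_Tue_07.33h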

And the streak record stands at 3 in a row.  Crash and burn.  I couldn't find the right script, so then I thought I found it, but anyway, it didn't look in the correct folder, which I should now rename again. Smiley

Big news on the paid job front: I have template code to work with bitcoinity in different currencies, and I have a script that combines the individual top 100 currency results into one 'table'.  Now I need something to run each currency script and then run the table_maker, and I can hear you all yelling, "Make it a function!"  To that I would say, you should have seen what I wanted to do.  Grin

table_maker.py
Code:
import requests, pyautogui, pyperclip, time, webbrowser
from datetime import datetime, timezone

file_paths = ['C:/PyProjects/VWAP_USD/top100usd.txt', 'C:/PyProjects/VWAP_EUR/top100eur.txt', 'C:/PyProjects/VWAP_gbp/top100gbp.txt']

# copilot wrote this clean code
with open('C:/PyProjects/output-file.txt', 'w', encoding='utf-8') as output_file:
    # Read lines from all files simultaneously
    with open(file_paths[0], 'r', encoding='utf-8') as file1, \
            open(file_paths[1], 'r', encoding='utf-8') as file2, \
            open(file_paths[2], 'r', encoding='utf-8') as file3:
        for line1, line2, line3 in zip(file1, file2, file3):
            # Write combined lines to the output file
            output_line = f"{line1.strip()} | {line2.strip()} {line3.strip()}\n"
            output_file.write(output_line)


# me putting the post on the clipboard
with open('C:/PyProjects/output-file.txt', 'r') as top100:
    list = top100.read()
    prelude = "[pre][size=10pt][url=https://bitcoincharts.com/charts/bitstampUSD]| Rank  BitStamp   USD/BTC   [/url] [url=https://data.bitcoinity.org/markets/volume/5y/EUR/kraken?r=day&t=b]| Rank  Kraken     EUR/BTC   [/url][url=https://data.bitcoinity.org/markets/volume/5y/GBP/kraken?r=day&t=b]| Rank  Kraken   GBP/BTC|[/url]"
    explanation = "[url=https://bitcointalk.org/index.php?topic=138109.msg54917391#msg54917391][size=8pt]     * * Chart Explanation * *[/size][/url][/pre]"
    full_post = f"{prelude}\n{list}{explanation}"
    pyperclip.copy(full_post)

Added edit: the bitcoinity template.  Typically my comments start lowercase, copilot's are uppercase.
Code:
import csv, os, pyperclip, requests, tabulate, time  # time added: time.time() is used just below
from datetime import datetime, timedelta
import pandas as pd

current_unix_time = int(time.time())
unix_time_month = 60 * 60 * 24 * 31
unix_time_day = 60 * 60 * 24
today = datetime.today()

url = "https://data.bitcoinity.org/export_data.csv?currency=GBP&data_type=volume&exchange=kraken&r=day&t=b×pan=5y&vu=curr"
response = requests.get(url)
print(response)
# Check if the request was successful (status code 200)
if response.status_code == 200:
    # Assuming the response contains CSV data, you can save it to a file
    with open("C:/PyProjects/VWAP_GBP/GBP_volume.csv", "w", newline="") as csvfile:
        csvfile.write(response.text)
    print("CSV file downloaded successfully!")
else:
    print(f"Error: {response.status_code} - Unable to download CSV data.")

url2 = "https://data.bitcoinity.org/export_data.csv?currency=GBP&data_type=volume&exchange=kraken&r=day&t=b×pan=5y"
response = requests.get(url2)
print(response)
# Check if the request was successful (status code 200)
if response.status_code == 200:
    # Assuming the response contains CSV data, you can save it to a file
    with open("C:/PyProjects/VWAP_GBP/BTC_volume.csv", "w", newline="") as csvfile:
        csvfile.write(response.text)
    print("CSV file downloaded successfully!")
else:
    print(f"Error: {response.status_code} - Unable to download CSV data.")

# Read the CSV files
datestamp = pd.read_csv('C:/PyProjects/VWAP_GBP/GBP_volume.csv', usecols=[0])  # Assuming timestamp is in the first column
gbp_df = pd.read_csv('C:/PyProjects/VWAP_GBP/GBP_volume.csv', usecols=[1])  # renamed from eur_df, a copy-paste leftover
btc_df = pd.read_csv('C:/PyProjects/VWAP_GBP/BTC_volume.csv', usecols=[1])

# Perform division
result_df = gbp_df / btc_df

# Save the combined DataFrame to a new CSV file
result_df.to_csv('C:/PyProjects/VWAP_GBP/result_with_timestamp.csv', index=False)

# Extract the date part from the timestamp
result_df['Date'] = pd.to_datetime(datestamp.iloc[:, 0]).dt.date

# Save the combined DataFrame to a new CSV file
result_df.to_csv('C:/PyProjects/VWAP_GBP/result_with_timestamp.csv', index=False)

print("Results with timestamp written to 'result_with_timestamp.csv'.")


filename = 'C:/PyProjects/VWAP_GBP/result_with_timestamp.csv'
top = 100
currency = "GBP"

# Read data from the CSV file
rows = []
with open(filename, 'r') as file:
    reader = csv.reader(file)
    next(reader)  # Skip the header row
    for row in reader:
        time, vwapgbp = row[1], row[0]
        rows.append((time, vwapgbp))
# sort the data by VWAPgbp
tuples = [(timestamp, vwap, i +1) for i, (timestamp, vwap) in enumerate(rows)]
sorted_tuples = sorted(tuples, key=lambda x: float(x[1]), reverse=True)
with open('C:/PyProjects/VWAP_gbp/top100gbp.txt', 'w') as top100:
    rank = 1
    for time, vwapgbp, rows in sorted_tuples[:top]:
        formatted_date = datetime.strptime(time, '%Y-%m-%d').strftime('%Y-%m-%d')
        date_difference = today - datetime.strptime(formatted_date, '%Y-%m-%d')
        if rank <= 99:
                spacing = " "
        if rank == 100:
                spacing = ""
        if date_difference <= timedelta(days = 31):
            bolding = "[b]"
            unbolding = "[/b]"
        if date_difference >= timedelta(days = 32):
            bolding = ""
            unbolding = ""
        if rows == 1827:
             redcoloring = "[color=red][u]"
             reduncoloring = "[/u][/color]"
        if rows != 1827:
             redcoloring = ""
             reduncoloring = ""
                    
        vwapgbp_float = float(vwapgbp)
        vgbp = str(int(vwapgbp_float))  
        print(f"{spacing}{redcoloring}{bolding}{rank:2d}  {formatted_date}  {vgbp} {currency}{unbolding}{reduncoloring}|")
        formatted_output = f"{spacing}{redcoloring}{bolding}{rank:2d}  {formatted_date}  {vgbp} {currency}{unbolding}{reduncoloring}|"
        top100.write(formatted_output + '\n')
        rank += 1
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
5.Archive All files and folders are now in the format dd-mm-yyyy, for easier auto sorting.
Suggestion: yyyy-mm-dd is much easier to sort.
Example:
Code:
2024_02_23_Fri_10.34h
2024_02_27_Tue_10.34h
2024_03_01_Fri_10.34h
2024_03_05_Tue_07.33h
2024_03_05_Tue_10.34h
2024_03_08_Fri_10.34h
2024_03_12_Tue_10.34h
Even ls shows everything in chronological order now.
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
OMGosh.  It finally worked twice in a row.  I've got the script set for pyautogui to hit preview, not post, but one little '#' will change everything.  Unless of course one small thing outside of my control changes and then... another crash and burn.  But you know what?  It's gonna be okay Smiley
1.Download no change
2.Import  terminal updates about loading the next page and skipping unneeded links
3.Export I figured out a better way to present the big gif link with '''url={imgur_big_gif + "v"}'''
4.Posting Realized a way to control the posting environment for pyautogui was to open Chrome and then go F11, fullscreen.  Soon to be posted on a timer maybe?
5.Archive All files and folders are now in the format dd-mm-yyyy, for easier auto sorting.
Code:
import csv, os, pyautogui, pyperclip, re, requests, shutil, time, urllib.request, webbrowser
from datetime import timedelta, date
from os import rename

startTime = time.perf_counter()

# set dates for links and new folder
today = date.today()
tomorrow = today + timedelta(1)

# name newfolder with date
directory = f"{today:%m}-{today:%d}"
parent_dir = "C:/Users/Games/CB/images/"


# get the final 20 gif layers in reverse order, starting with 24
number = 24
url4 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=0'
time.sleep(20)
response = requests.get(url4)

# turn response into textfile of the source code.
source_code = response.text

# read the source code, save it, and turn it into a string.  
textfile = open('C:/Users/Games/CB/Temp/CBSource.txt', 'a+')
textfile.write(source_code)
textfile.seek(0)
filetext = textfile.read()
textfile.close()

# find matches using regex, and for every match download the image and number it.  resorted to asking copilot for help with my regex
matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}\/\w{2}\/\w{2}\/\w{5}\.png', filetext)
for link in matches:
    dl_number = f"{number:02d}"
    print(number, link)
    urllib.request.urlretrieve(link, 'C:/Users/Games/CB/images/download ({}).png'.format(dl_number))
    number = number - 1
    time.sleep(2)
os.remove('C:/Users/Games/CB/Temp/CBSource.txt')
print("going on")

# get the first 4 images in reverse order; I copied my own code and changed the link.  Should have made a function and fed it the links, probably.
url5 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=20'
time.sleep(20)
response5 = requests.get(url5)
source_code = response5.text
textfile5 = open('C:/Users/Games/CB/Temp/CBSource2.txt', 'a+')
textfile5.write(source_code)
textfile5.seek(0)
filetext2 = textfile5.read()
textfile5.close()

# find matches using regex, and for first 4 matches download the image and number it
matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}\/\w{2}\/\w{2}\/\w{5}\.png', filetext2)
for link in matches:
    if number >= 1:
        dl_number = f"{number:02d}"
        print(number, link)
        urllib.request.urlretrieve(link, 'C:/Users/Games/CB/images/download ({}).png'.format(dl_number))
        number = number - 1
        time.sleep(2)
    if number <= 0:
        print("skipping link")
os.remove('C:/Users/Games/CB/Temp/CBSource2.txt')

# hot keys to open gimp and then the plugin that load layers, export, scale, export gifs, quit, agree to not save
time.sleep(5)
pyautogui.click(1, 1)
time.sleep(5)
pyautogui.hotkey('ctrl', 'alt', 'g')
time.sleep(40)
pyautogui.click(820, 446)
time.sleep(20)
pyautogui.hotkey('ctrl', 'alt', 'l')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('enter')
time.sleep(20)
pyautogui.hotkey('ctrl', 'q')
time.sleep(10)
pyautogui.hotkey('shift', 'tab')
time.sleep(5)
pyautogui.hotkey('enter')
time.sleep(60)

# uploading big gif and getting link to use later,
url = "https://api.imgur.com/3/image"
payload = {'name': f'b{today:%m}-{today:%d}-{today.year}'}
files=[('image',('gif.gif',open('C:/Users/Games/CB/Temp/gif.gif','rb'),'image/gif'))]
headers = {'Authorization': 'Bearer xXxXxXxXxXxXx'}
response = requests.post(url, headers=headers, data=payload, files=files)
data = response.json()
imgur_big_gif = data.get("data", {}).get("link")

# uploading talkimg gif and getting link to use later, cle
url = "https://talkimg.com/api/1/upload"
headers = {"X-API-Key": "uvwxXxXxXxXxXxXxyz"}
files = {"source": open("C:/Users/Games/CB/Temp/gif2.gif", "rb")}
payload = {"title": f'b{today:%m}-{today:%d}-{today.year}', "album_id": "UFbj"}
response = requests.post(url, headers=headers, data=payload, files=files)
data = response.json()
talkimg_gif = data["image"]["url"]

# add post to clipboard for btctalk
pyperclip.copy(f"ChartBuddy's 24 hour Wall Observation recap\n[url={imgur_big_gif + "v"}].[img]{talkimg_gif}[/img].[/url]\nAll Credit to [url=https://bitcointalk.org/index.php?topic=178336.msg10084622#msg10084622]ChartBuddy[/url]")

# can use this link for the reply button
url7 = 'https://bitcointalk.org/index.php?action=post;topic=178336.0'
webbrowser.open(url7)
time.sleep(20)
pyautogui.hotkey('f11')
time.sleep(10)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('ctrl', 'v')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
# we're doing it live if the next command is #ed out
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('enter')

# runtime is calculated
stopTime = time.perf_counter()
runtime = [stopTime - startTime]  # a one-element list, so csv writerow gets an iterable of fields

# save to csv file
f = open('C:/PyProjects/runtimes.csv', 'a', newline='')
writer = csv.writer(f)
writer.writerow(runtime)

time.sleep(20)

# prepare to store downloads
newf = os.path.join(parent_dir, directory)
os.mkdir(newf)
src = "C:/Users/Games/CB/images"
dest = "C:/Users/Games/CB/images/{}".format(directory)
files = os.listdir(src)
os.chdir(src)

# only move numbered png files
for file in files:
    if file.endswith(").png"):
        shutil.move(file, dest)  

# gifs are stored
rename ("C:/Users/Games/CB/Temp/gif.gif", f"C:/Users/Games/CB/{today.year}/{today.month}-{today.year}/b{today:%m}-{today:%d}.gif")
rename (f"C:/Users/Games/CB/Temp/gif2.gif", f"C:/Users/Games/CB/{today.year}/{today.month}-{today.year}/{today:%m}-{today:%d}.gif")

So exciting on the Top100 vwap list.  I executed a successful post from my phone using Chrome Remote Desktop back to the ole home PC.  I think I've figured out a way to get the green coloring automatic with a second tuple sort, but I don't have it ready yet.  We failed many ways, until I figured out to ask copilot to use tuples to keep the date rank and the vwap rank separate before sorting.  Then I could flag the most recent item on the list afterwards, which is the red, underlined one.  I'm really leaning on dooglous' code and copilot for this
Code:
import requests, pyautogui, pyperclip, time, webbrowser
from datetime import datetime, timezone

current_unix_time = int(time.time())
unix_time_month = 60 * 60 * 24 * 31
unix_time_day = 60 * 60 * 24

def fetch_bitcoin_data(days=1200, top=100, currency='USD'):
    url = f"http://bitcoincharts.com/charts/chart.json?m=bitstampUSD&r={days}&i=Daily"
    response = requests.get(url, verify=False)
    data = response.json()
    rank = 1
    rows = [(entry[0], entry[7]) for entry in data]
    rows = rows[:-1]
    tuples = [(timestamp, vwap, i +1) for i, (timestamp, vwap) in enumerate(rows)]
    sorted_tuples = sorted(tuples, key=lambda x: float(x[1]), reverse=True)

# opens file to store the top 100 vwaps
    with open('C:/PyProjects/VWAP_USD/top100usd.txt', 'w') as top100:

        # sorts the daily VWAP by highest average price, and ranks them
        for timestamp, vwap, rows in sorted_tuples[:top]:
            adjusted_timestamp = int(timestamp)
            utc_date = datetime.fromtimestamp(adjusted_timestamp, tz=timezone.utc).strftime('%Y-%m-%d')
            
            # this is to make the columns line up
            if rank <= 99:
                spacing = "  "
            if rank == 100:
                spacing = " "
          
            # this is to make top 100 vwaps within the last 31 days bold
            if timestamp >= current_unix_time - unix_time_month:
                bolding = "[b]"
                unbolding = "[/b]"
            if timestamp <= current_unix_time - (unix_time_month + 1):
                bolding = ""
                unbolding = ""

            # I noticed the most recent result (the red, underlined one, if it makes the list) was always the last,
            # being reverse sorted from a list of 1200, and never forget Python starts counting at zed :)
            if rows == 1199:
                redcoloring = "[red][u]"
                reduncoloring = "[/u][/red]"
            if rows != 1199:
                redcoloring = ""
                reduncoloring = ""
            print(f"{spacing}{redcoloring}{bolding}{rank:2d}  {utc_date}  {vwap:.0f} {currency}{unbolding}{reduncoloring}")
            formatted_output = f"{spacing}{redcoloring}{bolding}{rank:2d}  {utc_date}  {vwap:.0f} {currency}{unbolding}{reduncoloring}"
            top100.write(formatted_output + '\n')
            rank += 1

    # putting the post on the clipboard
    with open('C:/PyProjects/VWAP_USD/top100usd.txt', 'r') as top100:
        list = top100.read()
        prelude = "[pre][size=10pt][url=https://bitcoincharts.com/charts/bitstampUSD]Rank   BitStamp  USD/BTC[/url]"
        explanation = "[url=https://bitcointalk.org/index.php?topic=138109.msg54917391#msg54917391][size=8pt]     * * Chart Explanation * *[/size][/url][/pre]"
        full_post = f"{prelude}\n{list}{explanation}"
        pyperclip.copy(full_post)

    # can use this link for the reply page to top20 thread
    url = 'https://bitcointalk.org/index.php?action=post;topic=138109.0'
    webbrowser.open(url)
    time.sleep(5)
    pyautogui.hotkey('f11')
    time.sleep(5)
    pyautogui.hotkey('tab')
    time.sleep(2)
    pyautogui.hotkey('tab')
    time.sleep(2)
    pyautogui.hotkey('ctrl', 'v')
    time.sleep(2)
    pyautogui.hotkey('tab')
    time.sleep(2)
    # we're doing it live if the next command is #ed out
    # pyautogui.hotkey('tab')
    time.sleep(20)
    pyautogui.hotkey('enter')
    # open("top100.txt", 'w').close()
              
if __name__ == "__main__":
    fetch_bitcoin_data()

EDIT: Forgot I hadn't posted in so long, and forgot the previous update was pre auto-red-underlining, pre-F11ing
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
But of course, just change the regex to only include today's date?
You'll need some from yesterday too. I usually convert the "Today" on the forum to a real date first, then get everything from the last 24 hours.
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
One thing to fix is it currently downloads the last 24 images ChartBuddy posted, not necessarily only the posts from the last 24 hours.  I think I can figure out a way to request from Ninjastic Space, the number of posts in the last day, then fix the script to only download that many images for the day.
Why don't you use the time stamps on ChartBuddy's post history (and the second page)?
But of course, just change the regex to only include today's date?  Maybe... with error handling for the second page?  I'll see what I can do.  Thanks again.  Smiley
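Here is a rough sketch of the timestamp filter I have in mind; the date format is a guess on my part, so the strptime pattern would need checking against the real post history, and the forum's "Today at ..." stamps would need converting to a real date first, like you said:
Code:
from datetime import datetime, timedelta

cutoff = datetime.now() - timedelta(hours=24)

def is_recent(stamp):
    # assumed format, e.g. 'March 08, 2024, 10:34:56 AM'
    return datetime.strptime(stamp, '%B %d, %Y, %I:%M:%S %p') >= cutoff

print(is_recent('March 08, 2024, 10:34:56 AM'))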
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
One thing to fix is it currently downloads the last 24 images ChartBuddy posted, not necessarily only the posts from the last 24 hours.  I think I can figure out a way to request from Ninjastic Space, the number of posts in the last day, then fix the script to only download that many images for the day.
Why don't you use the time stamps on ChartBuddy's post history (and the second page)?
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
ChartBuddy Daily Recap Storylog:
1.Download This is one of the last things I really figured out, and it's been one of the least error-prone parts of the whole shebang.  One thing to fix: it currently downloads the last 24 images ChartBuddy posted, not necessarily only the posts from the last 24 hours.  I think I can request the number of posts in the last day from Ninjastic Space, then fix the script to only download that many images for the day.
2.Import My main goal now is to get rid of pyautogui and figure out how to run GIMP and post comments without having to worry about: is the window maximized? is it on the correct monitor?...
3.Export Big progress has been made on the talkimg and imgur front.  I have now been given a talkimg.com account, so I can use the API.  It took me a while to figure out the payload bit, but I eventually got it working.  I wonder how many people are racing through the code, right now, to see if I was foolish enough to post my secret API key on a public forum again.  Nope, not today, and I hope not in the future.  I was getting a bit down, because it was like I would try something with imgur and it would work, and then try it again and things would be different.  So if anyone was messing with me, thank you for not really doing any damage, but who am I kidding?  I'm sure it was just me.  Embarrassed  I remember back in the day when people would unknowingly expose their btc private key on the news or something, and zip, there go the cornz.  Shocked
4.Posting See above.
5.Archive Starting to date everything with 2 digit days and months for easier sorting.

Full03_08CB.py
Code:
import csv, json, os, pyautogui, pyperclip, re, requests, shutil, time, urllib.request, webbrowser
from datetime import timedelta, date
from os import rename

# on your marks, get set, go!
startTime = time.perf_counter()

# set 2 digit dates for links and new folder
today = date.today()
tomorrow = today + timedelta(1)

# get the final 20 gif layers in reverse order, starting with 24
number = 24
url4 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=0'
time.sleep(30)
response = requests.get(url4)

# turn response into textfile of the source code.
source_code = response.text

# read the source code, save it, and turn it into a string.  
textfile = open('C:/Users/Games/CBSource.txt', 'a+')
textfile.write(source_code)
textfile.seek(0)
filetext = textfile.read()
textfile.close()

# find matches using regex, and for every match download the image and number it.  resorted to asking copilot for help with my regex
matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', filetext)
for link in matches:
    print(number, link)
    urllib.request.urlretrieve(link, 'C:/Users/Games/CB/images/download ({}).png'.format(number))
    number = number - 1
    time.sleep(5)
os.remove('C:/Users/Games/CBSource.txt')

# get the first 4 images in reverse order; I copied my own code and changed the link.  Should have made a function and fed it the links, probably.
url5 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=20'
time.sleep(30)
response5 = requests.get(url5)
source_code = response5.text
textfile5 = open('C:/Users/Games/CBSource2.txt', 'a+')
textfile5.write(source_code)
textfile5.seek(0)
filetext = textfile5.read()
textfile5.close()

# find matches using regex, and for first 4 matches download the image and number it
matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', filetext)
for link in matches:
    if number >= 1:
        urllib.request.urlretrieve(link, 'C:/Users/Games/CB/images/download ({}).png'.format(number))
        print(number, link)
        number = number - 1
        time.sleep(5)
os.remove('C:/Users/Games/CBSource2.txt')

# hot keys to open gimp and then the plugin that load layers, export, scale, export gifs, quit, agree to not save
time.sleep(5)
pyautogui.click(1, 1)
time.sleep(5)
pyautogui.hotkey('ctrl', 'alt', 'g')
time.sleep(40)
pyautogui.click(820, 446)
time.sleep(20)
pyautogui.hotkey('ctrl', 'alt', 'l')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('enter')
time.sleep(10)
pyautogui.hotkey('ctrl', 'q')
time.sleep(5)
pyautogui.hotkey('shift', 'tab')
time.sleep(1)
pyautogui.hotkey('enter')
time.sleep(20)

# uploading big gif and getting link to use later,
url = "https://api.imgur.com/3/image"
payload = {'name': f'b{today.month:02d}-{today.day:02d}-{today.year}'}
files=[('image',('gif.gif',open('C:/Users/Games/gif.gif','rb'),'image/gif'))]
headers = {'Authorization': 'Bearer **********************************'}
response = requests.post(url, headers=headers, data=payload, files=files)
data = response.json()
imgur_big_gif = data.get("data", {}).get("link")

# uploading talkimg gif and getting link to use later,
url = "https://talkimg.com/api/1/upload"
headers = {"X-API-Key": "chv_e*************************************************************"}
files = {"source": open("C:/Users/Games/gif2.gif", "rb")}
payload = {"title": f'b{today.month:02d}-{today.day:02d}-{today.year}', "album_id": "UFbj"}
response = requests.post(url, headers=headers, data=payload, files=files)
data = response.json()
talkimg_gif = data["image"]["url"]

# add post to clipboard for btctalk
pyperclip.copy(f"ChartBuddy's 24 hour Wall Observation recap\n[url={imgur_big_gif}].[img]{talkimg_gif}[/img].[/url]\nAll Credit to [url=https://bitcointalk.org/index.php?topic=178336.msg10084622#msg10084622]ChartBuddy[/url]")

# can use this link for the reply button
url7 = 'https://bitcointalk.org/index.php?action=post;topic=178336.0'
webbrowser.open(url7)
time.sleep(20)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('ctrl', 'v')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
# we're doing it live if the next command is #ed out
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('enter')

# name newfolder with date
directory = f"{today.month:02d}-{today.day:02d}"
parent_dir = "C:/Users/Games/CB/images/"
newf = os.path.join(parent_dir, directory)
os.mkdir(newf)

# prepare to store downloads
src = "C:/Users/Games/CB/images"
dest = "C:/Users/Games/CB/images/{}".format(directory)
files = os.listdir(src)
os.chdir(src)

# only move numbered png files
for file in files:
    if file.endswith(").png"):
        shutil.move(file, dest)  

# gifs are stored:  NEED NEW MONTHLY FOLDER CREATION CODE
rename ("C:/Users/Games/gif.gif", f"C:/Users/Games/CB/{today.year}/{today.month:02d}-{today.year}/b{today.month:02d}-{today.day:02d}.gif")
rename (f"C:/Users/Games/gif2.gif", f"C:/Users/Games/CB/{today.year}/{today.month:02d}-{today.year}/{today.month:02d}-{today.day:02d}.gif")

# runtime is calculated
stopTime = time.perf_counter()
runtime = [stopTime - startTime]  # a one-element list, so csv writerow gets an iterable of fields

# save to csv file
f = open('C:/PyProjects/runtimes.csv', 'a', newline='')
writer = csv.writer(f)
writer.writerow(runtime)

And then we have this new job: posting to the top 100 days of volume weighted average price of BTC thread, née Top 20 days for Bitcoin, in the Speculation board.
Storylog:  The main challenge with this code was being able to run it from the road, which is where I'm usually located at UTC midnight.  So I have worked out how to put the top 100 vwaps on the clipboard, with bolding for the vwaps within the last 31 days.  All this week I have left my computer on (but not the monitors, of course) and used my phone with Chrome Remote Desktop to run the script.  Chrome Remote Desktop also shares the clipboard, so I can paste the list from my phone, which is so much easier than trying to control the home PC.  Then I paste into bitcointalk, change the colors for the latest and oldest top 100 vwap, and hit post.  I haven't yet figured out how to script the colors; I plan to work on that tomorrow.

Top100
Code:
import json, requests, pyautogui, pyperclip, time, webbrowser
from datetime import datetime, timezone

current_unix_time = int(time.time())
unix_time_month = 60 * 60 * 24 * 31
unix_time_day = 60 * 60 * 24

def fetch_bitcoin_data(days=1200, top=100, currency='USD'):
    url = f"http://bitcoincharts.com/charts/chart.json?m=bitstampUSD&r={days}&i=Daily"
    response = requests.get(url, verify=False)
    data = response.json()
    number = 1
    
    rows = [(entry[0], entry[7]) for entry in data]
    rows = rows[:-1]
    sorted_rows = sorted(rows, key=lambda x: float(x[1]), reverse=True)

# opens file to store the top 100 vwaps
    with open('top100test.txt', 'w') as top100:

        # sorts the daily VWAP by highest average price, and numbers them
        for timestamp, vwap in sorted_rows[:top]:
            adjusted_timestamp = int(timestamp)
            utc_date = datetime.fromtimestamp(adjusted_timestamp, tz=timezone.utc).strftime('%Y-%m-%d')
            
            # this is to make the columns look pretty
            if number <= 99:
                spacing = "  "
            if number == 100:
                spacing = " "
            # this is to make top 100 vwaps within the last 31 days bold
            if timestamp >= current_unix_time - unix_time_month:
                bolding = "[b]"
                unbolding = "[/b]"
            if timestamp <= current_unix_time - (unix_time_month + 1):
                bolding = ""
                unbolding = ""
            formatted_output = f"{spacing}{bolding}{number:2d}  {utc_date}  {vwap:.0f} {currency}{unbolding}"
            top100.write(formatted_output + '\n')
            # this gives them the rank number
            number += 1

    # putting the post on the clipboard
    with open('top100test.txt', 'r') as top100:  # same relative path as written above
        vwap_list = top100.read()  # renamed from "list" to avoid shadowing the builtin
        prelude = "[pre][size=10pt][url=https://bitcoincharts.com/charts/bitstampUSD]Rank   BitStamp  USD/BTC[/url]"
        explanation = "[url=https://bitcointalk.org/index.php?topic=138109.msg54917391#msg54917391][size=8pt]     * * Chart Explanation * *[/size][/url][/pre]"
        full_post = f"{prelude}\n{vwap_list}{explanation}"
        pyperclip.copy(full_post)

# the script as posted never called the function; without this nothing runs
if __name__ == "__main__":
    fetch_bitcoin_data()
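
For the missing color highlight, the newest entry could be wrapped the same way the bolding works; a minimal sketch with made-up sample rows, mirroring the [color=#7F0000] trick from dooglus's bash script further down:
Code:
# sketch only: sample (timestamp, vwap) rows, not real data
sorted_rows = [(1709251200, 62000.0), (1709164800, 61500.0), (1614556800, 58000.0)]
newest = max(ts for ts, _ in sorted_rows)  # most recent date among the top rows

for number, (timestamp, vwap) in enumerate(sorted_rows, start=1):
    if timestamp == newest:
        coloring, uncoloring = "[color=#7F0000]", "[/color]"
    else:
        coloring, uncoloring = "", ""
    print(f"{coloring}{number:2d}  {vwap:.0f} USD{uncoloring}")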

Because of course it did.   Cheesy  The current folder name is 3-2024
Code:
 File "c:\PyProjects\Full3_8CB.py", line 140, in
    rename ("C:/Users/Games/gif.gif", f"C:/Users/Games/CB/{today.year}/{today.month:02d}-{today.year}/b{today.month:02d}-{today.day:02d}.gif")
FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:/Users/Games/gif.gif' -> 'C:/Users/Games/CB/2024/03-2024/b03-08.gif'
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
It's amazing how many things can go wrong, right?  I opened up GIMP, admittedly to prepare for the daily, one click, do push ups special.  Because sometimes the first time opening GIMP takes a lot longer than subsequent times, per restart probably.  This time though, upon CTRL-ALT-Ging my way into GIMP, there was an update being proffered.  Well hell, I thought, yet another thing that would have poked through my travesty of a tapestry of code.  I remember wondering if the keyboard shortcut to open would carry over, but quickly stored that thought.

So...GIMP never opened, gifs were never made to upload, and fail.  But I only had to change the properties of the GIMP desktop shortcut, # out the download-files part, and remember to delete the empty current-date folder before rerunning, so I didn't get the folder-already-exists error again, because I apparently refuse to deal with error cases yet.

But I did come up with some useful code for the Top20 job.  It fixes the weird justification of the number 100, and bolds any top 100 daily volume weighted average prices set within the last month.  I have not yet figured out how to automatically highlight the most recent top 100 vwap.
Current Top20 code:
Code:
import requests, time
from datetime import datetime, timezone
# python Top20Current.py > vwap_ordered_list.txt

# setting time variables
current_unix_time = int(time.time())
unix_time_month = 60 * 60 * 24 * 31
unix_time_day = 60 * 60 * 24

#grabs json data, and sorts it by descending vwap
def fetch_bitcoin_data(days=1200, top=100, currency='USD'):
    url = f"http://bitcoincharts.com/charts/chart.json?m=bitstampUSD&r={days}&i=Daily"
    response = requests.get(url, verify=False)
    data = response.json()

#used for setting the rank, should be called rank not number.  todo, change number to rank
    number = 1

# we only want the date and vwap items from the full json return
    rows = [(entry[0], entry[7]) for entry in data]
    rows = rows[:-1]
    sorted_rows = sorted(rows, key=lambda x: float(x[1]), reverse=True)

# building the post in the terminal; I need to figure out how to add it to a file (see sketch below)
    print("[pre][size=10pt]")
    print("[url=https://bitcoincharts.com/charts/bitstampUSD]Rank   BitStamp  USD/BTC[/url]")              
    for timestamp, vwap in sorted_rows[:top]:
        adjusted_timestamp = int(timestamp)
        utc_date = datetime.fromtimestamp(adjusted_timestamp, tz=timezone.utc).strftime('%Y-%m-%d')
        if number <= 99:
            spacing = "  "
        if number == 100:
            spacing = " "
        if timestamp >= current_unix_time - unix_time_month:
            bolding = "[b]"
            unbolding = "[/b]"
        if timestamp <= current_unix_time - (unix_time_month + 1):
            bolding = ""
            unbolding = ""
        print(f"{spacing}{bolding}{number:2d}  {utc_date}  {vwap:.0f} {currency}{unbolding}")
        number += 1
    print("[url=https://bitcointalk.org/index.php?topic=138109.msg54917391#msg54917391][size=8pt]     * * Chart Explanation * *[/size][/url]")
    print("[/size][/pre]")
              
if __name__ == "__main__":
    fetch_bitcoin_data()

EDIT: changed file to folder because that is what I meant, then changed edit to error for the same reason.
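
EDIT2: about that add-it-to-a-file comment in the code: since the post is already built with print statements, redirecting stdout from inside the script (instead of the shell > in the top comment) looks like the least-surgery option.  A minimal sketch, assuming the same fetch_bitcoin_data function:
Code:
import contextlib

# sketch: everything fetch_bitcoin_data() prints lands in the file instead of the terminal
with open('vwap_ordered_list.txt', 'w') as outfile:
    with contextlib.redirect_stdout(outfile):
        fetch_bitcoin_data()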
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Ok thank you for that advice.  Maybe you've got some more.  Smiley  Apparently I've taken on another posting job in the Top 20 days for Bitcoin thread, while still retaining my amateur status.   Grin   Luckily the code that was being used was available, but it is a bash script.  I spent yesterday fumbling about with Linux and WSL, and then a VirtualBox Ubuntu install, trying to get it to work.  Copilot walked me through the steps as the errors rolled in: have to be in the same directory, set the environment, set permissions, set it to execute.  But in the end it would run and give no results.  Just onto the next prompt.  I do really like Python though, and after an initial translation by copilot that didn't work, we hacked out a partial solution.  The Visual Studio integration with WSL, like ImageMagick, does seem pretty useful and warrants further study.

Here is the base bash script from user dooglus, which I believe yefi used and may have modified, for example to include underlining.
Code:
vwap() {
    days=1200
    top=20
    currency=USD
    rows=$(wget -o/dev/null -O- "http://bitcoincharts.com/charts/chart.json?m=bitstampUSD&r=$days&i=Daily" |
                  sed 's/], \[/\n/g'   |
                  head -n $((days-1))  |
                  tr -d '[],'          |
                  awk '{print $1, $8}' |
                  sort -k2nr           |
                  head -$top
        )
    newest=$(echo "$rows" | sort -n | tail -1 | awk '{print $1}')
    printf "Update:\n[pre]\n"
    n=1
    month_ago=$(($(date +%s) - 60*60*24*32))
    echo "$rows" |
        while read t p
        do
            if ((t > month_ago)); then b1="[b]"            ; b2="[/b]"    ; else b1=""; b2=""; fi
            if ((t == newest))   ; then c1="[color=#7F0000]"; c2="[/color]"; else c1=""; c2=""; fi
            printf "%s%s%2d  %s  %7.2f $currency%s%s\n" "$b1" "$c1" $n "$(TZ= date -d @$t +%Y-%m-%d)" $p "$c2" "$b2"
            ((n++))
        done
    printf "[/pre]\n"
}

And here is the current Python code I'm using.  It's got a function in it, so you know I had help.   Cheesy  Again, I can read it, but not write it.  I did know what to change to get the price to the nearest dollar, and not penny.  Smiley
Code:
import requests
from datetime import datetime, timezone

#getting the last 1200 days of btc volume weighted average price
def fetch_bitcoin_data(days=1200, top=100, currency='USD'):
    url = f"http://bitcoincharts.com/charts/chart.json?m=bitstampUSD&r={days}&i=Daily"
    response = requests.get(url, verify=False)
    data = response.json()
    number = 1
    rows = [(entry[0], entry[7]) for entry in data]
    
    # Exclude the most recent entry (today's data)
    rows = rows[:-1]
    
    sorted_rows = sorted(rows, key=lambda x: x[1], reverse=True)
    
    for timestamp, vwap in sorted_rows[:top]:
        adjusted_timestamp = int(timestamp)
        utc_date = datetime.fromtimestamp(adjusted_timestamp, tz=timezone.utc).strftime('%Y-%m-%d')  # utcfromtimestamp is deprecated in newer Pythons
        print(f"{number:2d}  {utc_date}  {vwap:.0f} {currency}")
        number += 1
        if number > top:
            break

if __name__ == "__main__":
    fetch_bitcoin_data()
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
I need to look into what you meant by numerical sort, in terms of possible commands.
In sort, it's this:
Code:
       -n, --numeric-sort
              compare according to string numerical value
If you're going to move your code to Linux anyway, maybe it helps.

Quote
I'm really surprised the datetime module doesn't return double digit hours, days, months, all that stuff.
Isn't that an option you can toggle?
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Okay.  Nobody told me this bloody machine can't even count.  You tell it to load numbered files in order and it goes, 1, 10, 11-19, 2, 20, 21...  Grin
Lol. Been there, done that Smiley It's not counting, it's sorting. Easy fix: use leading zeros, or numerical sort.
I need to look into what you meant by numerical sort, in terms of possible commands.  But yeah, I guess it sorts the list alphabetically, not numerically.  But when I add the leading zeros it breaks all my plugins.  Smiley  Such is the way of progress.  I'm really surprised the datetime module doesn't return double digit hours, days, months, all that stuff.
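
EDIT: it turns out datetime can do the double digits after all; you just have to ask via a format spec or strftime instead of the raw attributes:
Code:
from datetime import date

today = date.today()
print(today.month)               # e.g. 3, the attribute is a plain int with no padding
print(f"{today.month:02d}")      # 03, zero-padded by the format spec
print(today.strftime('%m-%d'))   # e.g. 03-08, strftime pads by default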
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
Okay.  Nobody told me this bloody machine can't even count.  You tell it to load numbered files in order and it goes, 1, 10, 11-19, 2, 20, 21...  Grin
Lol. Been there, done that Smiley It's not counting, it's sorting. Easy fix: use leading zeros, or numerical sort.
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Okay.  Nobody told me this bloody machine can't even count.  You tell it to load numbered files in order and it goes, 1, 10, 11-19, 2, 20, 21...  Grin Luckily I could see something was amiss.
Here's the code I finally squeezed out.  I did have to do a quick Brave search to recall the method of stating a range, and getting the length of a list.  Also some hard coding of dates that needs to be fixed.  I figured the easiest way I knew to get all these images into GIMP in the right order would be to number them as they were placed in a single folder; 674 this previous month.  Then my existing GIMP gif-making plugin could easily be modified, but I think I actually just ctrl-a selected them all, dragged them into GIMP with the title page already loaded, used the reverse-the-layers plugin, exported, and Bob's your uncle.
Code:
import os, shutil, time
from os import rename

#making a backup
shutil.copytree('C:/Users/Games/CB/CBuddyDaily', 'C:/Users/Games/Backup')

# set dates and variables for folders and files
# today = date.today()
# tomorrow = today + timedelta(1)
destination = "C:/Users/Games/CB/2024/2-2024/Monthly"
hour_number = 1
day_number = 1

# for 29 days this year
for i in range(1, 30):
    file_number = 1
    src = f"C:/Users/Games/CB/CBuddyDaily/2-{day_number:02d}"
    files = os.listdir(src)
    CB_daily_post_total = len(files) + 1
    os.chdir(src)
    time.sleep(1)

    for m in range(1, CB_daily_post_total):
        rename (f'C:/Users/Games/CB/CBuddyDaily/2-{day_number:02d}/download ({m}).png', f"C:/Users/Games/CB/2024/2-2024/Monthly/download ({hour_number}).png")
        print(hour_number, file_number, m)
        hour_number += 1
        file_number += 1
    day_number += 1
    print(day_number)
There might be some good error checking code in there.   If I knew how many posts ChartBuddy made that day before starting the whole process, that would be helpful.
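
For instance, a pre-flight sanity check along these lines (a sketch reusing the same folder layout) would flag a short day before the renumbering starts:
Code:
import os

# sketch: count each day's images up front and flag any day that isn't the expected 24
for day in range(1, 30):
    src = f"C:/Users/Games/CB/CBuddyDaily/2-{day:02d}"
    count = len(os.listdir(src))
    if count != 24:
        print(f"2-{day:02d}: expected 24 images, found {count}")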

Still waiting for that perfect run, but things really went smoothly this run, only because there were exactly 24 images to download.  Gonna work on that.
1.Download Left click
2.Import  Do pushups.  Y'all hear about the 100 pushups a day till 100k btc challenge?  https://bitcointalksearch.org/topic/--5484350
3.Export  
4.Posting  I have it set to skip the Post button and tab one more time to the Preview button, for now.  This time I had to add the monthly recap to the post.
5.Archive  Here, it finally errored out.  On the penultimate command, because I didn't have a new 3-2024 folder to store the gifs in.

2_29CB.py errr 3_1CB.py EDIT: I'm so distressed I couldn't post the monthly, I don't even know what day it is.  Just like my code sometimes.  Cry
Code:
import csv, os, pyautogui, pyperclip, re, requests, shutil, time, urllib.request, webbrowser
from datetime import timedelta, date
from os import rename

startTime = time.perf_counter()

# set dates for links and new folder
today = date.today()
tomorrow = today + timedelta(1)

# name newfolder with date
directory = f"{today.month}-{today.day}"
parent_dir = "C:/Users/Games/CB/images/"
newf = os.path.join(parent_dir, directory)
os.mkdir(newf)

# get the final 20 gif layers in reverse order, starting with 24
number = 24
url4 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=0'
response = requests.get(url4)

# turn response into textfile of the source code.
source_code = response.text

# read the source code, save it, and turn it into a string.  
textfile = open('C:/Users/Games/CBSource.txt', 'a+')
textfile.write(source_code)
textfile.seek(0)
filetext = textfile.read()
textfile.close()

# find matches using regex, and for every match download the image and number it.  resorted to asking copilot for help with my regex
matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', filetext)
for link in matches:
    print(number, link)
    urllib.request.urlretrieve(link, 'C:/Users/Games/CB/images/download ({}).png'.format(number))
    number = number - 1
    time.sleep(5)
os.remove('C:/Users/Games/CBSource.txt')

# get the first 4 images in reverse order, i copied my own code and changed the link.  Should have made a function and then fed it the links probably.
url5 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=20'
response5 = requests.get(url5)
source_code = response5.text
textfile5 = open('C:/Users/Games/CBSource2.txt', 'a+')
textfile5.write(source_code)
textfile5.seek(0)
filetext = textfile5.read()
textfile5.close()

# find matches using regex, and for first 4 matches download the image and number it
matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', filetext)
for link in matches:
    if number >= 1:
        urllib.request.urlretrieve(link, 'C:/Users/Games/CB/images/download ({}).png'.format(number))
        print(number, link)
        number = number - 1
        time.sleep(5)
os.remove('C:/Users/Games/CBSource2.txt')

# hot keys to open gimp and then the plugin that load layers, export, scale, export gifs, quit, agree to not save
time.sleep(5)
pyautogui.click(1, 1)
time.sleep(5)
pyautogui.hotkey('ctrl', 'alt', 'g')
time.sleep(20)
pyautogui.click(820, 446)
time.sleep(20)
pyautogui.hotkey('ctrl', 'alt', 'l')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('enter')
time.sleep(10)
pyautogui.hotkey('ctrl', 'q')
time.sleep(5)
pyautogui.hotkey('shift', 'tab')
time.sleep(1)
pyautogui.hotkey('enter')
time.sleep(10)

# uploading big gif and getting link to use later,
url = "https://api.imgur.com/3/image"
payload = {'name': f'b{today.month}-{today.day}'}
files=[('image',('C:/Users/Games/gif.gif',open('C:/Users/Games/gif.gif','rb'),'image/gif'))]
headers = {'Authorization': 'Bearer f0e27b94e6f8ead1480763e666c8587b73365850'}
response = requests.request("POST", url, headers=headers, data=payload, files=files)

# looking for the link
imgur_return = response.text
linkfile = open('C:/Users/Games/imgurlink.txt', 'a+')
linkfile.write(imgur_return)
linkfile.seek(0)
filetext = linkfile.read()
linkfile.close()
imgurlink = re.findall(r'https:\/\/i\.imgur\.com\/.*\.gif', filetext)

# and the following only works  i think because it's the only link in the JSON response
for imgur in imgurlink:
    imgur_big_gif = imgur
os.remove('C:/Users/Games/imgurlink.txt')

# open imgtalk to upload gif2
url3 = "https://www.talkimg.com/"
webbrowser.open(url3)
time.sleep(30)
pyautogui.click(953, 590)
time.sleep(5)
pyautogui.click(221, 479)
time.sleep(5)
pyautogui.typewrite("gif2.gif")
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(10)
pyautogui.hotkey('enter')
time.sleep(5)
pyautogui.click(949, 645)
time.sleep(5)
pyautogui.click(1276, 625)
time.sleep(5)
imgtalklink = pyperclip.paste()

# add post to clipboard for btctalk
pyperclip.copy(f"ChartBuddy's 24 hour Wall Observation recap\n[url={imgur_big_gif}].{imgtalklink}.[/url]\nAll Credit to [url=https://bitcointalk.org/index.php?topic=178336.msg10084622#msg10084622]ChartBuddy[/url]")

# can use this link for the reply button
url7 = 'https://bitcointalk.org/index.php?action=post;topic=178336.0'
webbrowser.open(url7)
time.sleep(10)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('ctrl', 'v')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
# we're doing it live if the next command is #ed out
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('enter')

# runtime is calculated
stopTime = time.perf_counter()
runtime = [stopTime - startTime]  # a list (not a set) so csv.writer writes one tidy row

# save to csv file
f = open('C:/PyProjects/runtimes.csv', 'a', newline='')
writer = csv.writer(f)
writer.writerow(runtime)
f.close()  # close so the row is flushed to disk

time.sleep(20)

# prepare to store downloads
src = "C:/Users/Games/CB/images"
dest = "C:/Users/Games/CB/images/{}".format(directory)
files = os.listdir(src)
os.chdir(src)

# only move numbered png files
for file in files:
    if file.endswith(").png"):
        shutil.move(file, dest)  

# big gif is stored
rename ("C:/Users/Games/gif.gif", f"C:/Users/Games/CB/{today.year}/{today.month}-{today.year}/b{today.month}-{today.day}.gif")

# little gif is stored
rename (f"C:/Users/Games/gif2.gif", f"C:/Users/Games/CB/{today.year}/{today.month}-{today.year}/{today.month}-{today.day}.gif")

Next moves: Handling errors, exceptions, and elses.  What a save!
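
First candidate, the folder-already-exists crash; a minimal sketch with the same unpadded path format as the script above, where the try/except is the new bit:
Code:
import os
from datetime import date

today = date.today()
daily = f"C:/Users/Games/CB/images/{today.month}-{today.day}"

# sketch: tolerate a leftover daily folder instead of dying on FileExistsError
try:
    os.mkdir(daily)
except FileExistsError:
    print(f"{daily} already exists, reusing it")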

EDIT:  Runtime was 364.1 s, of which 220 s were sleep commands to make sure things weren't happening too fast.  Very nice.
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process:
~
I'm going to install linux on a virtual machine, which i do have some very limited experience with, and see how things go there.
Linux will never tell you you can't move or delete a file because it's in use. It just does what you tell it to do. If you would delete a movie while it's playing, it keeps playing until the end anyway.

That would seem to solve that problem, because if I understand how things are happening, GIMP should be done with the file once GIMP is closed, but obviously not.  But here's a crazy thing.  Tonight the upload failed while the file move succeeded.  Huh  I did move the file move command to the very end of the script, but it had previously failed all day in testing with the same changes.   Huh  I need to pay closer attention to the finer details.  Oh.  Yeah.  My attempt at an auto monthly recap resulted in... not that.  Going to try again tomorrow.  The code for the monthly recap is below, but honestly, of course I might have messed up my own code by backing up empty folders by accident. Whoops.
Whoopsie.
Code:
import os, shutil
from os import rename

#making a backup
shutil.copytree('C:/Users/Games/CB/CBuddyDaily', 'C:/Users/Games/Backup')

# set dates and variables for file numbering
# today = date.today()
# tomorrow = today + timedelta(1)
destination = "C:/Users/Games/CB/2024/2-2024/Monthly"
hour_number = 1
day_number= 1

for i in range(1, 30):
    src = f"C:/Users/Games/CB/CBuddyDaily/02-{day_number:02d}"
    files = os.listdir(src)
    os.chdir(src)
    print(src)
    for file in files:
        rename (file, f"C:/Users/Games/CB/2024/2-2024/Monthly/download ({hour_number}).png")
        hour_number += 1
        print(hour_number)
        print(day_number)
    day_number += 1
    print(day_number)

def play_game(Rocket_League)
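
Note for future me on that WinError 32: a retry loop around the move might be enough to wait out whoever still has the file open.  A sketch, with a made-up example call:
Code:
import shutil, time

def move_when_free(src, dest, tries=30):
    # sketch: Windows raises PermissionError while another process still holds the file
    for _ in range(tries):
        try:
            shutil.move(src, dest)
            return True
        except PermissionError:
            time.sleep(2)  # give GIMP a moment to let go, then try again
    return False

# hypothetical call:
# move_when_free('C:/Users/Games/gif.gif', 'C:/Users/Games/CB/2024/2-2024/b2-29.gif')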
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process:
~
I'm going to install linux on a virtual machine, which i do have some very limited experience with, and see how things go there.
Linux will never tell you you can't move or delete a file because it's in use. It just does what you tell it to do. If you would delete a movie while it's playing, it keeps playing until the end anyway.
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Last time was another head scratcher.  I figured out the hard parts, downloading the images, making the gifs, composing and posting the post, but now one of the first things I was doing, moving files around, kept erroring out.

PermissionError: [WinError 32] The process cannot access the file because it is being used by another process:

I commented out all of the downloading below, because that part of the code worked the first time, along with all the file transfer parts that stopped working.  Just in time for the end of the month recap.  Brilliant.   Grin  That's what I'm having fun with now: I'm going to try and figure it out just looking at snippets I've grabbed, and then I'm going to see what copilot says, without providing any of my code, only telling it what I want the script to do.  Oh, and it is probably my mistake somehow, but the copilot solution to pulling the link out of the API using JSON didn't work.  I think I imported everything required, json, and api from requests?  Can't fool around with that now, I'm sticking to what works.

Code:
import csv, os, pyautogui, pyperclip, re, requests, shutil, time, urllib.request, webbrowser
from datetime import timedelta, date
from os import rename

startTime = time.perf_counter()

# set dates for links and new folder
today = date.today()
tomorrow = today + timedelta(1)

# # get the final 20 gif layers in reverse order, starting with 24
# number = 24
# url4 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=0'
# response = requests.get(url4)

# # turn response into textfile of the source code.
# source_code = response.text

# # read the source code, save it, and turn it into a string.  
# textfile = open('C:/Users/Games/CBSource.txt', 'a+')
# textfile.write(source_code)
# textfile.seek(0)
# filetext = textfile.read()
# textfile.close()

# # find matches using regex, and for every match download the image and number it.  resorted to asking copilot for help with my regex
# matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', filetext)
# for link in matches:
#     print(number, link)
#     urllib.request.urlretrieve(link, 'C:/Users/Games/CB/images/download ({}).png'.format(number))
#     number = number - 1
#     time.sleep(5)
# os.remove('C:/Users/Games/CBSource.txt')

# # get the first 4 images in reverse order, i copied my own code and changed the link.  Should have made a function and then fed it the links probably.
# url5 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=20'
# response5 = requests.get(url5)
# source_code = response5.text
# textfile5 = open('C:/Users/Games/CBSource2.txt', 'a+')
# textfile5.write(source_code)
# textfile5.seek(0)
# filetext = textfile5.read()
# textfile5.close()

# # find matches using regex, and for first 4 matches download the image and number it
# matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', filetext)
# for link in matches:
#     if number >= 1:
#         urllib.request.urlretrieve(link, 'C:/Users/Games/CB/images/download ({}).png'.format(number))
#         print(number, link)
#         number = number - 1
#         time.sleep(5)
# os.remove('C:/Users/Games/CBSource2.txt')

# name newfolder with date
directory = f"{today.month}-{today.day}"
parent_dir = "C:/Users/Games/CB/images/"
newf = os.path.join(parent_dir, directory)
os.mkdir(newf)

# command for show desktop, and clicking an empty region on the proper monitor
time.sleep(5)
pyautogui.hotkey('win', 'd')
time.sleep(5)
pyautogui.click(1, 1)
time.sleep(5)

# hot keys to open gimp and then the plugin that load layers, export, scale, export gifs, quit, agree to not save
pyautogui.hotkey('ctrl', 'alt', 'g')
time.sleep(10)
pyautogui.click(820, 446)
time.sleep(5)
pyautogui.hotkey('ctrl', 'alt', 'l')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('enter')
time.sleep(10)
pyautogui.hotkey('ctrl', 'q')
time.sleep(5)
pyautogui.hotkey('shift', 'tab')
time.sleep(1)
pyautogui.hotkey('enter')
time.sleep(5)
print('gif done')

# uploading big gif and getting link to use later,
url = "https://api.imgur.com/3/image"
payload = {'name': f'b{today.month}-{today.day}'}
files=[('image',('C:/Users/Games/Postman/files/gif.gif',open('C:/Users/Games/Postman/files/gif.gif','rb'),'image/gif'))]
headers = {'Authorization': 'Bearer f0e27b94e6f8ead1480763e666c8587b73365850'}
response = requests.request("POST", url, headers=headers, data=payload, files=files)

# looking for the link
imgur_return = response.text
linkfile = open('C:/Users/Games/imgurlink.txt', 'a+')
linkfile.write(imgur_return)
linkfile.seek(0)
filetext = linkfile.read()
linkfile.close()
imgurlink = re.findall(r'https:\/\/i\.imgur\.com\/.*\.gif', filetext)
# ibg = imgurlink
# print (ibg)

# if i don't do it the following way, the link comes out with ['brackets and quotes']
# that's probably because what i've been 're turned' is a list
# and the following only works because it's the only link in the JSON response
for imgur in imgurlink:
    imgur_big_gif = imgur
os.remove('C:/Users/Games/imgurlink.txt')

# big gif is stored, hmm cancelling all file movements, both these methods have worked.  I think i need to close the file or something
# PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:/Users/Games/Postman/files/gif.gif'
# src = "C:/Users/Games/Postman/files/gif.gif"
# dest = f"C:/Users/Games/CB/{today.year}/{today.month}-{today.year}/"
# shutil.move('C:/Users/Games/Postman/files/gif.gif', dest)
# rename ("C:/Users/Games/CB/2024/2-2024/gif.gif", f"C:/Users/Games/CB/{today.year}/{today.month}-{today.year}/b{today.month}-{today.day}.gif")

# OR
# look at me turning 4 lines of code into 1
# rename ("C:/Users/Games/Postman/files/gif.gif", f"C:/Users/Games/CB/{today.year}/{today.month}-{today.year}/b{today.month}-{today.day}.gif")
# rename ("C:/Users/Games/Postman/files/gif.gif", f"C:/Users/Games/CB/{today.year}/{today.month}-{today.year}/b{today.month}-{today.day}.gif")

# open imgtalk to upload gif2
url3 = "https://www.talkimg.com/"
webbrowser.open(url3)
time.sleep(10)
pyautogui.click(953, 590)
time.sleep(5)
pyautogui.click(221, 479)
time.sleep(5)
pyautogui.typewrite("gif2.gif")
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(10)
pyautogui.hotkey('enter')
time.sleep(5)
pyautogui.click(949, 645)
time.sleep(5)
pyautogui.click(1276, 625)
time.sleep(5)
imgtalklink = pyperclip.paste()

# little gif is stored
#  rename (f"C:/Users/Games/Postman/files/gif2.gif", f"C:/Users/Games/CB/{today.year}/{today.month}-{today.year}/{today.month}-{today.day}.gif")

# # prepare to store downloads
# src = "C:/Users/Games/CB/images"
# dest = "C:/Users/Games/CB/images/{}".format(directory)
# files = os.listdir(src)
# os.chdir(src)

# # only move numbered png files
# for file in files:
#     if file.endswith(").png"):
#         shutil.move(file, dest)  

# add post to clipboard for btctalk
pyperclip.copy(f"ChartBuddy's 24 hour Wall Observation recap\n[url={imgur_big_gif}].{imgtalklink}.[/url]\nAll Credit to [url=https://bitcointalk.org/index.php?topic=178336.msg10084622#msg10084622]ChartBuddy[/url]")

# can use this link for the reply button
url7 = 'https://bitcointalk.org/index.php?action=post;topic=178336.0'
webbrowser.open(url7)
time.sleep(10)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('ctrl', 'v')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
# we're doing it live if the next command is #ed out
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('enter')

# runtime is calculated
stopTime = time.perf_counter()
runtime = [stopTime - startTime]  # a list (not a set) so csv.writer writes one tidy row

# save to csv file
f = open('C:/PyProjects/runtimes.csv', 'a', newline='')
writer = csv.writer(f)
writer.writerow(runtime)
f.close()  # close so the row is flushed to disk
Crash and burn again, with the permissions.  I thought changing GIMP's exports from Postman's folder to my own folder would solve the problem.  Which it did, but apparently it was one time use only.  I understand Buddy needs space, so I'm going to install Linux on a virtual machine, which I do have some very limited experience with, and see how things go there.  See you on the other side.
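
PS, for whenever I circle back to the JSON idea: I think the clean version is just response.json() on the upload response.  A sketch, assuming imgur's v3 response shape, with a placeholder token:
Code:
import requests

url = "https://api.imgur.com/3/image"
headers = {'Authorization': 'Bearer YOUR_TOKEN_HERE'}  # placeholder, not a real token

# sketch: upload the gif and read the link straight out of the parsed JSON
with open('C:/Users/Games/gif.gif', 'rb') as gif:
    response = requests.post(url, headers=headers, files={'image': gif})

data = response.json()                # requests parses the JSON body for us
imgur_big_gif = data['data']['link']  # v3 keeps the image URL under data -> link
print(imgur_big_gif)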
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Fingers of lead tonight mates.  Sad  I couldn't leave well enough alone.  Not that it worked completely last night, but here's a lesson that I know but didn't implement: test after each change to one's code, because if one changes 10 things and it doesn't work, which thing is the problem?  I still can't figure out what happened with the imgur API last night, and how it uploaded an old gif.  But now I can't get it to work with Python at all; I can upload using the Postman desktop app, but the Python script it gives me kept erroring out tonight, even though I didn't touch that part of the code.  Last night something didn't work with imgur, but my code did, somehow, stumble the rest of the way to post the talkimg hosted gif into the BCT forum.
At least I've been saving daily versions of the script, so tonight I went back a few days and had a runtime of 342.5 s.   Embarrassed
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
One click! One click! One click!  No, not 3 clicks, I said One click!, but it's actually much, much more!  How's that, you ask?  Well, in one click, sorry, in One click!, I got the post, with the proper gif, posted to the WO thread.  But for reasons I can't figure out yet, the imgur.com clickable link came out using 2-23 data.  Huh  So I needed clicks to fix it.  Must be something to do with testing, but the code I ran is below.  I also successfully failed, sailed, in step 4, when the 'my way' multi-step solution to a simple answer went wrong, yet showed the way.  Went wrong in the sense that there are many parts to solving a problem; it's not just the solving part.  Effort should be put into identifying the variables, and how the variables affect the outcome.  If I had mapped out all the outcomes, I might have seen from step one that just applying my pyautogui commands to the static link I've hard coded would've been much easier.

1.Download
2.Import  
3.Export
I need to figure out what to do on days with fewer than 24 downloads.  Also I'm going to work on less file movement: just put it where it's going and call it from there when needed.  Before that I should understand virtual environments better.

4.Posting  
I'm looking forward to removing all the pyautogui, and using the command line.  But if I'm logged into BCT and imgur, everything should work with one click. Cheesy  I had a plan and I executed it, but it ain't pretty.

I figured out my way to get the most recent page of the Wall Observer thread: by looking at the first page source of the thread, which is a static link, to find the link to the last page of the thread at the bottom of the page.  I had some fun with that, because of my ignorance of the finer differences between strings and lists.  But with a little help from copilot we powered on.  I now see one could also find the link to the most current page number on the reply page to that thread. ATTN: Code Error produced 1000, not the desired 661080, or I guess it should have been 661060, since this particular post is the first on a new page.  OR, ya big doofus, that's me, just hit reply on the first page and let the forum code handle how to post it as the most recent post.  Roll Eyes  I realized this when my solution failed and sent me to some other page rather than the last one, but since pyautogui mindlessly went on to hit reply, everything worked out okay, for that part.  Phew, at least I was in the proper thread.  Smiley

5.Archive
I just learned that pyautogui can also click and drag, so it should be possible, for fun, to drag all the images from every day this month into GIMP.  Maybe set up another race... Smiley  It would be good practice for loops and functions maybe.  I'm seeing a function that auto double-clicks the first day's folder at (x, y), then selects all, ctrl + a.  This way there is no need to know how many images are in each day to move.  Then click and drag into GIMP, go back to the previous folder, and loop around with (x, y + 20) or wherever the next day's folder is.  Hmm, something like the sketch below maybe.

I would need to know how many folders there are, or I could change the name of the folder to include a variable number that goes up by one each time a new day is added, then just cut it out of the directory name to plug into the mouse-moving function at the end of the month.
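
The sketch mentioned above; every coordinate and the 20 px row offset are made up, so this is shape-of-the-idea only:
Code:
import pyautogui, time

def drag_day_into_gimp(row):
    # hypothetical layout: folder list starts at (100, 200), one row every 20 px
    x, y = 100, 200 + 20 * row
    pyautogui.doubleClick(x, y)                    # open that day's folder
    time.sleep(2)
    pyautogui.hotkey('ctrl', 'a')                  # select everything, count unknown is fine
    pyautogui.moveTo(100, 200)                     # grab the selection by its first icon
    pyautogui.dragTo(1500, 500, 2, button='left')  # drag the lot onto the GIMP window
    time.sleep(2)
    pyautogui.hotkey('alt', 'left')                # Explorer's back button, to the month folder
    time.sleep(2)

for day in range(29):                              # one row per day folder this month
    drag_day_into_gimp(day)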

2-25 run code runtime = 313.0 s
Code:
import csv, os, pyautogui, pyperclip, re, requests, shutil, time, urllib.request, webbrowser
from datetime import timedelta, date
from os import rename

startTime = time.perf_counter()

# set dates for links and new folder
today = date.today()
tomorrow = today + timedelta(1)

# get the final 20 gif layers in reverse order, starting with 24
number = 24
url4 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=0'
response = requests.get(url4)

# turn response into textfile of the source code.
source_code = response.text

# read the source code, save it, and turn it into a string.  
textfile = open('C:/Users/Games/CBSource.txt', 'a+')
textfile.write(source_code)
textfile.seek(0)
filetext = textfile.read()
textfile.close()

# find matches using regex, and for every match download the image and number it.  resorted to asking copilot for help with my regex
matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', filetext)
for link in matches:
    print(number, link)
    urllib.request.urlretrieve(link, 'download ({}).png'.format(number))
    number = number - 1
    time.sleep(5)
os.remove('C:/Users/Games/CBSource.txt')

# get the first 4 images in reverse order, i copied my own code and changed the link.  Should have made a function and then fed it the links probably.
url5 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=20'
response5 = requests.get(url5)
source_code = response5.text
textfile5 = open('C:/Users/Games/CBSource2.txt', 'a+')
textfile5.write(source_code)
textfile5.seek(0)
filetext = textfile5.read()
textfile5.close()

# find matches using regex, and for first 4 matches download the image and number it
matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', filetext)
for link in matches:
    if number >= 1:
        urllib.request.urlretrieve(link, 'download ({}).png'.format(number))
        print(number, link)
        number = number - 1
        time.sleep(5)
os.remove('C:/Users/Games/CBSource2.txt')

# move em where they usually go, repurposing code
src = "C:/Users/Games/"
dest = "C:/Users/Games/Downloads/"
files = os.listdir(src)
os.chdir(src)

# only move numbered png files
for file in files:
    if os.path.isfile(file): # probably don't need this because of the next if?
        if file.endswith(").png"):
            shutil.move(file, dest)

# name newfolder with date
directory = f"{today.month}-{today.day}"
parent_dir = "C:/Users/Games/Downloads"
newf = os.path.join(parent_dir, directory)
os.mkdir(newf)

# make sure everything is in the right place, no need to rush.  yet :)
# command for show desktop, and clicking an empty region on the proper monitor
time.sleep(5)
pyautogui.hotkey('win', 'd')
time.sleep(5)
pyautogui.click(1, 1)
time.sleep(5)

# hot keys to open gimp and then the plugin that load layers, export, scale, export gifs, quit, agree to not save
pyautogui.hotkey('ctrl', 'alt', 'g')
time.sleep(10)
pyautogui.hotkey('ctrl', 'alt', 'l')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('enter')
time.sleep(10)
pyautogui.hotkey('ctrl', 'q')
time.sleep(5)
pyautogui.hotkey('shift', 'tab')
time.sleep(1)
pyautogui.hotkey('enter')
time.sleep(5)

# uploading big gif and getting link to use later,
url = "https://api.imgur.com/3/image"
payload = {'name': f'b{today.month}-{today.day}'}
files=[('image',('gif.gif',open('gif.gif','rb'),'image/gif'))]
headers = {'Authorization': 'Bearer f0e27b94e6f8ead1480763e666c8587b73365850'}
response = requests.request("POST", url, headers=headers, data=payload, files=files)

# find imgur url from api response
imgur_return = response.text
linkfile = open('C:/Users/Games/imgurlink.txt', 'a+')
linkfile.write(imgur_return)
linkfile.seek(0)
filetext = linkfile.read()
linkfile.close()
imgurlink = re.findall(r'https:\/\/i\.imgur\.com\/.*\.gif', filetext)
# ibg = imgurlink
# print (ibg)

# if i don't do it this way, the link comes out with ['brackets and quotes']
# that's probably because what i've been 're turned' is a list
# and the following only works because it's the only link in the JSON response
for imgur in imgurlink:
    ibg = imgur
os.remove('C:/Users/Games/imgurlink.txt')

# big gif is stored
src = "C:/Users/Games/Postman/files/gif.gif"
dest = f"C:/PyProjects/GMIP/{today.year}/{today.month}-{today.year}/"
shutil.move("C:/Users/Games/Postman/files/gif.gif", dest)
rename (f"C:/PyProjects/GMIP/{today.year}/{today.month}-{today.year}/gif.gif", f"C:/PyProjects/GMIP/{today.year}/{today.month}-{today.year}/b{today.month}-{today.day}.gif")

# open imgtalk to upload gif2
url3 = "https://www.talkimg.com/"
webbrowser.open(url3)

# pyautogui to the rescue
time.sleep(10)
pyautogui.click(953, 590)
time.sleep(5)
pyautogui.click(221, 479)
time.sleep(5)
pyautogui.typewrite("gif2.gif")
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(10)
pyautogui.hotkey('enter')
time.sleep(5)
pyautogui.click(949, 645)
time.sleep(5)
pyautogui.click(1276, 625)
time.sleep(5)
imgtalklink = pyperclip.paste()

# little gif is stored
src = "C:/Users/Games/Postman/files/"
dest = f"C:/PyProjects/GMIP/{today.year}/{today.month}-{today.year}/"
shutil.move("C:/Users/Games/Postman/files/gif2.gif", dest)
rename (f"C:/PyProjects/GMIP/{today.year}/{today.month}-{today.year}/gif2.gif", f"C:/PyProjects/GMIP/{today.year}/{today.month}-{today.year}/{today.month}-{today.day}.gif")

# make a list of files in downloads folder
src = "C:/Users/Games/Downloads"
dest = "C:/Users/Games/Downloads/{}".format(directory)
files = os.listdir(src)
os.chdir(src)

# only move numbered png files
for file in files:
    if os.path.isfile(file):
        if file.endswith(").png"):
            shutil.move(file, dest)  

# add post to clipboard for btctalk
pyperclip.copy(f"ChartBuddy's 24 hour Wall Observation recap\n[url={ibg}].{imgtalklink}.[/url]\nAll Credit to [url=https://bitcointalk.org/index.php?topic=178336.msg10084622#msg10084622]ChartBuddy[/url]")

# what kind of newb posts their own post? ;)
# this is one way to get the most current wall observer page through the page source
url7 = 'https://bitcointalk.org/index.php?topic=178336.0'
response = requests.get(url7)

# turn response into textfile of the source code.
source_code = response.text

# read the source code, save it, and turn it into a string.  
textfile = open('C:/Users/Games/CBSource.txt', 'a+')
textfile.write(source_code)
textfile.seek(0)
filetext = textfile.read()
textfile.close()
os.remove('C:/Users/Games/CBSource.txt')

# look for all the links again
matches = re.findall(r'https:\/\/bitcointalk\.org\/index\.php\?topic=178336\.[0-9]+', filetext)

# start an empty list and fill it with all the links and split off the thread numbers
pages = []
for hit in matches:
    res = hit.rsplit('.', 1)[-1]
    # res2 = res[1:2:1], oh boy, i just found this comment before runtime, but i searched for res2 and this is the only instance so i'm going for it like this
    # “The Times 2/26/2024 07:24 utc DK on brink of second crash.”  let's do this
    pages.append(res)

# i knew link i wanted was 5 from the end, so start at -5 for, until -6 is hit, -1 each time
# what I figured out to do, but it was coming back [['what I want']]
theone = (pages[-5:-6:-1])

# getting tired so didn't even try, copilot said do this
theone = theone[0].strip('[]')

# I'm back! time for more pyautogui
url7 = 'https://bitcointalk.org/index.php?topic=178336.{}'.format(theone)
webbrowser.open(url7)
time.sleep(5)
pyautogui.click(1171, 347)
time.sleep(5)
pyautogui.hotkey('end')
time.sleep(5)
pyautogui.click(1627, 829)
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(5)
pyautogui.hotkey('ctrl', 'v')
time.sleep(5)
# hit post!
pyautogui.click(914, 1008)

# runtime is calculated
stopTime = time.perf_counter()
runtime = [stopTime - startTime]  # a list (not a set) so csv.writer writes one tidy row

# save to csv file
f = open('C:/PyProjects/runtimes.csv', 'a', newline='')
writer = csv.writer(f)
writer.writerow(runtime)
f.close()  # close so the row is flushed to disk

EDIT: added details about 'semicrash' circumstances
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
It didn't work, it didn't work, we. didn't. work.  Big crash, BUT, huge breakthrough, although I need more testing to make sure this even worked...but.  Huge crashing progress!  With enough pyautoguis we could take over the world!  Firstly, the 2_22 build has worked great the last 2 days, with runtimes of 239.2 s and 252.5 s.  But we need fewer keystrokes.   Grin

Storylog:
1.Download
Works with a click Smiley

2.Import
Son of a batch.  Gonna take the L on this one for now.  Even copilot said it should be working.  Examples follow
Code:
PATH=%PATH%;"C:\Program Files\GIMP 2\bin"
gimp-2.10 --batch-interpreter python-fu-eval --pdb-compat-mode="on" -b "pdb.python_fu_loadlay" -b pdb.file_gif_save2 (image, drawable, "C:/PyProjects/tmp/gif.gif", "C:/PyProjects/tmp/gif.gif", 0, 1, 1000, 0, 0, 0, 0)

OR

PATH=%PATH%;"C:\Program Files\GIMP 2\bin"
gimp-2.10 --batch-interpreter python-fu-eval -b "pdb.python_fu_loadlay" -b "pdb.file_gif_save2" (image, drawable, "C:/PyProjects/tmp/gif.gif", "gif.gif", 0, 1, 1000, 0, 0, 0, 0)

OR

"C:\Program Files\GIMP 2\bin\gimp-console-2.10.exe" -i --batch-interpreter python-fu-eval -b "pdb.python_fu_loadlay" -b "(gimp-quit 1)"

But with pyautogui, I could just program in the mindless tabs and clicks.  Amazing!  So that I did.
 
3.Export
See above re: pyautogui's cool factor

4.Posting  
Figured out some of imgur's API, and how to crudely hack the link out of the imgur json response, I think, to get the link after uploading.  Full disclosure, I gave up and asked copilot, and then figured out how to make 'my' way work.  Then there is my attempt to use pyautogui to upload a talkimg image and collect the link with no API.  I believe it would have worked, I think, if I had put the script to sleep for longer before moving the created gifs.  We'll see tomorrow...
Edge's copilot way
Code:
import json  # the snippet needs this import to run

# Assuming 'api' contains the response object
response_data = json.loads(api.text)
image_link = response_data.get('data', {}).get('link')
ibg = image_link

5.Archive
tick tock

Code with two fails follows.  I believe one is easier to spot if you check the imgur filename.  The other, which I may be wrong about, is the need for a delay before moving the created gifs.

2_24 production build:
Code:
import csv, os, pyautogui, pyperclip, re, requests, shutil, subprocess, time, urllib.request, webbrowser
from datetime import timedelta, date
from os import rename
from tkinter import Tk

# start runtimer
startTime = time.perf_counter()

# set dates for links and new folder
today = date.today()
tomorrow = today + timedelta(1)
      
# learn to scrape 24 images 1 second at a time, yatta!
# get the final 20 gif layers in reverse order, starting with 24
number = 24
url4 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=0'
response = requests.get(url4)

# turn response into textfile of the source code.
source_code = response.text

# read the source code, save it, and turn it into a string.  
textfile = open('C:/Users/Games/CBSource.txt', 'a+')
textfile.write(source_code)
textfile.seek(0)
filetext = textfile.read()
textfile.close()

# find matches using regex, and for every match download the image and number it.  resorted to asking copilot for help with my regex
matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', filetext)
for link in matches:
    print(link)
    urllib.request.urlretrieve(link, 'download ({}).png'.format(number))
    number = number - 1
    time.sleep(3)

#delete the source code
os.remove('C:/Users/Games/CBSource.txt')

# get the first 4 images in reverse order, i copied my own code and changed the link.  Should have made a function and then fed it the links probably.
url5 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=20'
response5 = requests.get(url5)
source_code = response5.text
textfile5 = open('C:/Users/Games/CBSource2.txt', 'a+')
textfile5.write(source_code)
textfile5.seek(0)
filetext = textfile5.read()
textfile5.close()

# find matches using regex, and for first 4 matches download the image and number it
matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', filetext)
for link in matches:
    if number >=1:
        urllib.request.urlretrieve(link, 'download ({}).png'.format(number))
        number = number - 1
        time.sleep(3)
        print(link)
    
# delete the source code
os.remove('C:/Users/Games/CBSource2.txt')

# move em where they usually go, repurposing code
src = "C:/Users/Games/"
dest = "C:/Users/Games/Downloads/"
files = os.listdir(src)
os.chdir(src)

# only move numbered png files
for file in files:
    if os.path.isfile(file): # probably don't need this because of the next if?
        if file.endswith(").png"):
            shutil.move(file, dest)  

# name newfolder with date
directory = f"{today.month}-{today.day}"
parent_dir = "C:/Users/Games/Downloads"
newf = os.path.join(parent_dir, directory)
os.mkdir(newf)
print("Directory '%s' created" %directory)

# hot keys for confirming plugin to open, load, export, scale, export gif, gif2, quit, agree to not save
# bye, bye, batch (for now :)
pyautogui.hotkey('ctrl', 'alt', 'g')
time.sleep(10)
pyautogui.hotkey('ctrl', 'alt', 'l')
time.sleep(5)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('enter')
time.sleep(10)
pyautogui.hotkey('ctrl', 'q')
time.sleep(5)
pyautogui.hotkey('shift', 'tab')
time.sleep(1)
pyautogui.hotkey('enter')

# uploading big gif and getting link to use later,
url = "https://api.imgur.com/3/image"
payload = {'name': 'b{today.month}-{today.day}'}
files=[('image',('gif.gif',open('gif.gif','rb'),'image/gif'))]
headers = {'Authorization': 'Bearer f0e27b94e6f8ead1480763e666c8587b73365850'}
response = requests.request("POST", url, headers=headers, data=payload, files=files)

# repurposing some code, that means define a function, right?
# find imgur url from api response
imgur_return = response.text
linkfile = open('C:/Users/Games/imgurlink.txt', 'a+')
linkfile.write(imgur_return)
linkfile.seek(0)
filetext = linkfile.read()
linkfile.close()

# delete the link.txt
os.remove('C:/Users/Games/imgurlink.txt')

imgurlink = re.findall(r'https:\/\/i\.imgur\.com\/.*\.gif', filetext)
# ibg = imgurlink
# print (ibg)

# if i don't do it this way the link comes out with ['brackets and quotes']
for imgur in imgurlink:
    ibg = imgur

# big gif is moved
src = "C:/Users/Games/Postman/files/gif.gif"
dest = "C:/PyProjects/GMIP/2024/2-2024/"
shutil.move("C:/Users/Games/Postman/files/gif.gif", dest)
rename ("C:/PyProjects/GMIP/2024/2-2024/gif.gif", f"C:/PyProjects/GMIP/2024/2-2024/b{today.month}-{today.day}.gif")

# little gif is moved
src = "C:/Users/Games/Postman/files/"
dest = "C:/PyProjects/GMIP/2024/2-2024/"
shutil.move("C:/Users/Games/Postman/files/gif2.gif", dest)
rename ("C:/PyProjects/GMIP/2024/2-2024/gif2.gif", f"C:/PyProjects/GMIP/2024/2-2024/{today.month}-{today.day}.gif")

# ID files
src = "C:/Users/Games/Downloads"
dest = "C:/Users/Games/Downloads/{}".format(directory)
files = os.listdir(src)
os.chdir(src)

# only move numbered png files
for file in files:
    if os.path.isfile(file):
        if file.endswith(").png"):
            shutil.move(file, dest)  

# open websites to upload gifs
url3 = "https://www.talkimg.com/"
webbrowser.open(url3)

# pyautogui to the rescue
time.sleep(5)

# click start uploading
pyautogui.click(953, 590)
time.sleep(5)

# click file enter box
pyautogui.click(221, 479)
time.sleep(5)

# type name of small gif
pyautogui.typewrite("gif2.gif")
time.sleep(5)

# move selection to save
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('tab')
time.sleep(1)
pyautogui.hotkey('enter')
time.sleep(5)
pyautogui.click(949, 645)
time.sleep(5)

# click mouse to copy the talkimg link
pyautogui.click(1276, 625)

imgtalklink = pyperclip.paste()

# add post to clipboard for btctalk
r = Tk()
r.withdraw()
r.clipboard_clear()
r.clipboard_append(f"ChartBuddy's 24 hour Wall Observation recap\n[url={ibg}].{imgtalklink}.[/url]\nAll Credit to [url=https://bitcointalk.org/index.php?topic=178336.msg10084622#msg10084622]ChartBuddy[/url]")
r.update()

#this holds the post on the clipboard until posted
print("All done?")
input()

# runtime is calculated
stopTime = time.perf_counter()
runtime = [stopTime - startTime]  # a list (not a set) so csv.writer writes one tidy row

# save to csv file
f = open('C:/PyProjects/runtimes.csv', 'a', newline='')
writer = csv.writer(f)
writer.writerow(runtime)
f.close()  # close so the row is flushed to disk

EDIT: added comments to code about talkimg, pyautogui.  phrasing
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
It worked, it worked, we worked! Man and machine in harmony, like the ending of...well I don't want to spoil that one.   Wink The runtime was surprisingly high at 288.3 s, which I guess I have no reason to doubt, but I should manually test the accuracy of my method.  I did set a delay of 5 seconds between downloads, when manually it was probably in the 2-3 second range between downloads.   But the real savings is in keystrokes.  Let's see...graciously estimating
1. Scroll through the day and download each ChartBuddy post.  Scrolling through usually around 4 pages to right click, left click, enter 24 times, which would be (3*24)+3 = 75
2. Drag each downloaded image into GIMP as a new layer  Click and drag 24 images, I hope I only did that a few times, 2*24 = 48 more actions
3. Export full size gif to imgur for the clickable link, and export optimized gif for the in-thread talkimg hosted one  Oh boy, let's remember: export as, 2 clicks, type the name.gif, 5, timing the frames, 4 actions, click scale and choose size, 5 actions, export again, (2+5+4+5)*2 = 32
4. Put together the post and post  Let's not count the typing, clicking on 2 bookmarks, 2 click and drags, 6
5. Archive images for later use in a monthly replay  Creating a new folder and naming it by date, 7 clicks and keystrokes and a click and drag, 9.  75 + 48 + 32 + 6 + 9 = 170 total
170 actions.

And this time it was.  
1.Download: Click run, sit back and relax, 1
2.Import: loading, ctrl-alt-l, 3 tabs, enter, 7
3.Export: ctrl-alt-b, 2 tabs, enter, ctrl-alt-s, 2 tabs, enter, enter to script, 13
4.Post: same, 6
5.Archive: 0
27 actions, most of them mindless clicking through GIMP which I believe can do everything I need it to, all from the command line.  Exciting stuff!

Changelog:
1.Download Wow.  I am now auto downloading the images from BCT, and naming them in the process.

3.Export  I changed the GIMP gif saving plugin by adding the resize of the gif in the same plugin.  So instead of save, resize, save (which all have keyboard shortcuts), it's now just save, save2.  I've duplicated the GIMP gif saving plugin so I don't have to wait to move the big gif, called gif.gif, before the small gif is exported, which was also called gif.gif.  The small gif is now gif2.gif.  Which also means I get to get rid of 2 user inputs, which were only there to stop the code, waiting for me to tell it to proceed.  Well, it still has to wait once I guess, but it will be so much faster, one less keystroke for sure. Smiley

Working on.
2.Import Work in Progress:  I've got a code example of using the command line to start a Python script, that calls GIMP plug ins.  I'm trying to modify it for this purpose.
4.Post I guess I should have been working on that runtime posting bot after all.  Smiley
5.Archive  Blessed with an extra day this month to figure the code for the monthly recap.  

Current Code:
Code:
from datetime import timedelta, date
import time
import shutil
import os
from os import rename
import webbrowser
import subprocess
from tkinter import Tk
import csv
import re
import urllib.request
import requests

# start runtimer
startTime = time.perf_counter()

# set dates for links and new folder
today = date.today()
tomorrow = today + timedelta(1)
      
# open websites to upload gifs
url2 = "https://imgur.com/upload"
url3 = "https://www.talkimg.com/"
webbrowser.open(url2)
webbrowser.open(url3)

# get the final 20 gif layers in reverse order, starting with 24
number = 24
url4 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=0'
response = requests.get(url4)

# turn response into textfile of the source code.
source_code = response.text

# read the source code, save it, and turn it into a string.  
textfile = open('C:/Users/Games/CBSource.txt', 'a+')
textfile.write(source_code)
textfile.seek(0)
filetext = textfile.read()
textfile.close()

# find matches using regex, and for every match download the image and number it.  resorted to asking copilot for help with my regex
matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', filetext)
for link in matches:
    print(link)
    urllib.request.urlretrieve(link, 'download ({}).png'.format(number))
    number = number - 1
    time.sleep(5)

#delete the source code
os.remove('C:/Users/Games/CBSource.txt')

# get the first 4 images in reverse order, i copied my own code and changed the link.  Should have made a function and then fed it the links probably.
url5 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=20'
response5 = requests.get(url5)
source_code = response5.text
textfile5 = open('C:/Users/Games/CBSource2.txt', 'a+')
textfile5.write(source_code)
textfile5.seek(0)
filetext = textfile5.read()
textfile5.close()

# find matches using regex, and for first 4 matches download the image and number it
matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', filetext)
for link in matches:
    if number >=1:
        urllib.request.urlretrieve(link, 'download ({}).png'.format(number))
        number = number - 1
        time.sleep(5)
        print(link)
    
# delete the source code
os.remove('C:/Users/Games/CBSource2.txt')

# move em where they usually go, repurposing code
src = "C:/Users/Games/"
dest = "C:/Users/Games/Downloads/"
files = os.listdir(src)
os.chdir(src)

# only move numbered png files
for file in files:
    if os.path.isfile(file): # probably don't need this because of the next if?
        if file.endswith(").png"):
            shutil.move(file, dest)  

# name newfolder with date
directory = f"{today.month}-{today.day}"
parent_dir = "C:/Users/Games/Downloads"
newf = os.path.join(parent_dir, directory)
os.mkdir(newf)
print("Directory '%s' created" %directory)

# learn to scrape 24 images 1 second at a time, yatta!
# else manually download each file

# automatically open gimp, then filter to load all images
subprocess.Popen([r'C:/Program Files/GIMP 2/bin/gimp-2.10.exe'])

# export gifs press the any key then enter
print("Are the gifs exported?")
input()
print("Movin' on...")

# big gif is moved
src = "C:/PyProjects/tmp/"
dest = "C:/PyProjects/GMIP/2024/2-2024/"
shutil.move("C:/PyProjects/tmp/gif.gif", dest)
rename ("C:/PyProjects/GMIP/2024/2-2024/gif.gif", f"C:/PyProjects/GMIP/2024/2-2024/b{today.month}-{today.day}.gif")

# little gif is moved
src = "C:/PyProjects/tmp/"
dest = "C:/PyProjects/GMIP/2024/2-2024/"
shutil.move("C:/PyProjects/tmp/gif2.gif", dest)
rename ("C:/PyProjects/GMIP/2024/2-2024/gif2.gif", f"C:/PyProjects/GMIP/2024/2-2024/{today.month}-{today.day}.gif")

# ID files
src = "C:/Users/Games/Downloads"
dest = "C:/Users/Games/Downloads/{}".format(directory)
files = os.listdir(src)
os.chdir(src)

# only move numbered png files
for file in files:
    if os.path.isfile(file):
        if file.endswith(").png"):
            shutil.move(file, dest)  

# upload to two sites, gather links to input into console
ibg = input("imgur big gif link here")
imgtalk = input("imgtalk little gif link here")

# add post to clipboard for btctalk
r = Tk()
r.withdraw()
r.clipboard_clear()
r.clipboard_append(f"ChartBuddy's 24 hour Wall Observation recap\n[url={ibg}].{imgtalk}.[/url]\nAll Credit to [url=https://bitcointalk.org/index.php?topic=178336.msg10084622#msg10084622]ChartBuddy[/url]")
r.update()

#this holds the post on the clipboard until posted
print("All done?")
input()

#runtime is calculated
stopTime = time.perf_counter()
runtime = {stopTime - startTime}

# save to csv file
f = open('C:/PyProjects/runtimes.csv', 'a', newline='')
writer = csv.writer(f)
writer.writerow(runtime)
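
Note to self on the 'should have made a function' comment up there: the two download blocks only differ by the page URL and how many files to keep, so they could fold into one helper.  A rough, untested sketch (the function name is made up), which also skips the temp file since response.text is already a string:
Code:
import re
import time
import urllib.request
import requests

# hypothetical helper folding both download blocks into one; untested sketch
def download_page(url, number, keep_all=True):
    response = requests.get(url)
    matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', response.text)
    for link in matches:
        if keep_all or number >= 1:
            print(link)
            urllib.request.urlretrieve(link, 'download ({}).png'.format(number))
            number = number - 1
            time.sleep(5)
    return number

# page one: all 20 posts; page two: only enough to count number down to 1
number = download_page('https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=0', 24)
download_page('https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=20', number, keep_all=False)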
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
It was the best of times, it was the worst of times.
Three days with no runtimes.  The first two were because I knew the code would fail.  I have no error handling for what happens if ChartBuddy decides to take an hour off, to maybe go out for a pizza.  So I didn't even try.  
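
When I do get to error handling, a first step could be as simple as checking the countdown variable after the download loops in the script below; if ChartBuddy skipped an hour, it won't have reached zero.  A tiny sketch:
Code:
# sketch: after both download loops, number should have counted down from 24 to 0
if number != 0:
    raise SystemExit("expected 24 images, still missing {} - aborting".format(number))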

But now!  Such progress!  I found the pages on bitcointalk.org that have ChartBuddy's last 20 posts, and I have the auto download code, and it worked!  It's probably not the best way or the fastest way, and I needed copilot's help, but it is definitely my way. Smiley Because it then crashed.  Huh  But here is the relevant new code, with no BeautifulSoup; I only needed the page source to grab the links.  I knew the links would be in the form of https://www.talkimg.com/images/(4 digit year)/(2 digit month)/(2 digit date)/*****.png, where only the last five characters of the filename would be different.  I tried and tried, but had to resort to copilot to get my regex correct.
Code:
# get the last 20 images in reverse order, starting with 24, ChartBuddy is user 110685
url = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=0'

# send a get request and get the response object
response = requests.get(url)

# turn response into textfile of the source code, not sure if needed, what else did I request?
source_code = response.text

# read the source code, save it, and turn it into a string.  Why am i saving it if I'm going to delete it, can probably skip this instruction, but it might help with error handling?  
textfile = open('C:/Users/Games/CBSource.txt', 'a+')
textfile.write(source_code)
textfile.seek(0)
filetext = textfile.read()
textfile.close()

number = 24

# find matches using regex, and for every match download the image, and number it.  i resorted to asking copilot for help with my regex
matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', filetext)
for link in matches:
    print(link)
    urllib.request.urlretrieve(link, 'download({}).png'.format(number))
    number = number - 1
    time.sleep(5)

#delete the source code
os.remove('C:/Users/Games/CBSource.txt')
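
Side note on the pattern: \w matches letters, digits, and underscores, so the year/month/day chunks would also match things like 'abcd'.  Since those parts are always digits, \d is a tighter fit, and Python regexes don't need the forward slashes escaped anyway:
Code:
# stricter version of the same pattern: digits for the date parts, no escaped slashes
matches = re.findall(r'https://www\.talkimg\.com/images/\d{4}/\d{2}/\d{2}/\w{5}\.png', filetext)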

One thing I'm still ignorant of, which is okay, is why I can sometimes pass variables directly, sometimes one has to use curly brackets, and other times empty curly brackets with the variable supplied at the end, but learning is living.
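
Actually, writing that out loud, those are three different features.  Inside an f-string the curly brackets interpolate right where they sit; with .format the empty brackets get filled from the arguments at the end; and bare curly brackets in normal code build a set, which is why runtime = {stopTime - startTime} still works with writer.writerow: a one-element set is an iterable.  A list would say the same thing more plainly:
Code:
import csv

x = 42
print(f"value: {x}")          # f-string: braces interpolate in place
print("value: {}".format(x))  # .format: empty braces filled from the end
runtime = {x}                 # bare braces: this is a one-element set!

with open('demo.csv', 'a', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(runtime)  # works because a set is iterable...
    writer.writerow([x])      # ...but a list says it more plainly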

Here is the full script I ran today, which I know is going to work tomorrow, after making a tiny change.  Or maybe it won't. Cheesy  If it does, I am going to try and work on error handling.
If you can see why this script crashed you are an awesome debugger!  (hint: it happens early  Grin)
Code:
from datetime import timedelta, date
import time
import shutil
import os
from os import rename
import webbrowser
import subprocess
from tkinter import Tk
import csv
import re
import urllib.request
import requests

# start runtimer
startTime = time.perf_counter()

#set dates for links and new folder
today = date.today()
tomorrow = today + timedelta(1)
      
# open links to download and upload images if only I could get out of the way
# url1 = f"https://injastic.space/search?after_date={today.strftime("%Y")}-{today.strftime("%m")}-{today.strftime("%d")}T07%3A55%3A00&author=ChartBuddy&before_date={tomorrow.strftime("%Y")}-{tomorrow.strftime("%m")}-{tomorrow.strftime("%d")}T07%3A55%3A00"
url2 = "https://imgur.com/upload"
url3 = "https://www.talkimg.com/"
webbrowser.open(url1)
webbrowser.open(url2)
webbrowser.open(url3)

# holy cow is this going to work all together on the first try
# get the last 20 images in reverse order, starting with 24
# set the file numbering start
number = 24

# get the last 20 images in reverse order, starting with 24
url = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=0'

# Send a get request and get the response object
response = requests.get(url)

# turn response into textfile of the source code, not sure if needed, what else did I request?
source_code = response.text

# read the source code, save it, and turn it into a string.  Why am i saving it if I'm going to delete it?  
textfile = open('C:/Users/Games/CBSource.txt', 'a+')
textfile.write(source_code)
textfile.seek(0)
filetext = textfile.read()
textfile.close()

# find matches using regex, and for every match download the image and number it.  resorted to asking copilot for help with my regex
matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', filetext)
for link in matches:
    print(link)
    urllib.request.urlretrieve(link, 'download({}).png'.format(number))
    number = number - 1
    time.sleep(5)

#delete the source code
os.remove('C:/Users/Games/CBSource.txt')

# get the first 4 images in reverse order, i copied my own code and changed the link.  Should have made a function and then fed it the links probably.
# i renamed everything here with 2, i'm not sure i needed to, but I think I did it correctly
url2 = 'https://bitcointalk.org/index.php?action=profile;u=110685;sa=showPosts;start=20'

# Send a GET request and get the response object
response2 = requests.get(url2)

# turn response into textfile of source
source_code = response2.text

# read the source code and turn it into a string
textfile2 = open('C:/Users/Games/CBSource2.txt', 'a+')
textfile2.write(source_code)
textfile2.seek(0)
filetext = textfile2.read()
textfile2.close()

# find matches using regex, and for first 4 matches download the image and number it
# tried using finditer i think it was to set a limit for the first four results, but i was getting another string, so this seemed like a workaround
matches = re.findall(r'https:\/\/www\.talkimg\.com\/images\/\w{4}/\w{2}\/\w{2}\/\w{5}\.png', filetext)
for link in matches:
    if number >=1:
        urllib.request.urlretrieve(link, 'download({}).png'.format(number))
        number = number - 1
        time.sleep(5)
        print(link)
    
# delete the source code
os.remove('C:/Users/Games/CBSource2.txt')

# move em where they usually go, repurposing code
# ID files
src = "C:/Users/Games/"
dest = "C:/Users/Games/Downloads/"
files = os.listdir(src)
os.chdir(src)

# i have named the new downloads to look like the old manual downloads
# only move numbered png files
for file in files:
    if os.path.isfile(file): # probably don't need this because of the next if?
        if file.endswith(").png"):
            shutil.move(file, dest)  

#back to the old code
# name newfolder with date
directory = f"{today.month}-{today.day}"
parent_dir = "C:/Users/Games/Downloads"
newf = os.path.join(parent_dir, directory)
os.mkdir(newf)
print("Directory '%s' created" %directory)

# learn to scrape 24 images 1 second at a time,
# else manually download each file

# automatically open gimp, then filter to load all images
subprocess.Popen([r'C:/Program Files/GIMP 2/bin/gimp-2.10.exe'])

# export big gif press the any key then enter
print("Is big gif exported?")
input()
print("Movin' on...")

# big gif is moved
src = "C:/PyProjects/tmp/"
dest = "C:/PyProjects/GMIP/2024/2-2024/"
shutil.move("C:/PyProjects/tmp/gif.gif", dest)
rename ("C:/PyProjects/GMIP/2024/2-2024/gif.gif", f"C:/PyProjects/GMIP/2024/2-2024/b{today.month}-{today.day}.gif")

# scale image and export little gif
print("Is little gif exported?")
input()
print("Movin' on...")

# little gif is moved
src = "C:/PyProjects/tmp/"
dest = "C:/PyProjects/GMIP/2024/2-2024/"
shutil.move("C:/PyProjects/tmp/gif.gif", dest)
rename ("C:/PyProjects/GMIP/2024/2-2024/gif.gif", f"C:/PyProjects/GMIP/2024/2-2024/{today.month}-{today.day}.gif")

# ID files
src = "C:/Users/Games/Downloads"
dest = "C:/Users/Games/Downloads/{}".format(directory)
files = os.listdir(src)
os.chdir(src)

# i have a dummy file present so new downloads look like download(*).png
# only move numbered png files
for file in files:
    if os.path.isfile(file):
        if file.endswith(").png"):
            shutil.move(file, dest)  

# upload to two sites, gather links to input into console
ibg = input("imgur big gif link here")
imgtalk = input("imgtalk little gif link here")

# add post to clipboard for btctalk
r = Tk()
r.withdraw()
r.clipboard_clear()
r.clipboard_append(f"ChartBuddy's 24 hour Wall Observation recap\n[url={ibg}].{imgtalk}.[/url]\nAll Credit to [url=https://bitcointalk.org/index.php?topic=178336.msg10084622#msg10084622]ChartBuddy[/url]")
r.update()

#this holds the post on the clipboard until posted
print("All done?")
input()

#runtime is calculated
stopTime = time.perf_counter()
runtime = {stopTime - startTime}

# save to csv file
f = open('C:/PyProjects/runtimes.csv', 'a', newline='')
writer = csv.writer(f)
writer.writerow(runtime)

EDIT: added details about regex, added a line of code defining number to first snippet, changelog soon Smiley  
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
I've decided that unless the man himself, Richy_T, says so, I'm not going to alter the images posted.  Back to more faux-ding I guess.  Grin

But I did come up with some prototypes for our perusal. I'd love to post the recap at midnight UTC, but for that we would need full bot mode, I presume.
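
For the midnight UTC posting, short of full bot mode the script could at least wait for midnight on its own.  A minimal sketch, assuming the rest of the pipeline is wrapped in some run_recap() function (made-up name):
Code:
from datetime import datetime, timedelta, timezone
import time

def wait_until_midnight_utc():
    now = datetime.now(timezone.utc)
    # midnight at the start of the next UTC day
    target = (now + timedelta(days=1)).replace(hour=0, minute=0, second=0, microsecond=0)
    time.sleep((target - now).total_seconds())

wait_until_midnight_utc()
# run_recap()  # hypothetical entry point for the rest of the pipeline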

First up is what I call barlay, for the progress bar and importing the layers together.
Code:
https://imgur.com/a/r1LloJI

Next is barlaytouch, which was an accident, but I thought it looked cool.
Code:
https://imgur.com/a/J8b9mQM

Datelay is a prototype only; getting the spacing right on different days would be a lot of ifs to deal with variable kerning. Smiley
Code:
https://imgur.com/a/MwSiujF

Then we have the fun ccbarlay, for color changing.  Took me a while to figure out that the RGB values switched on me from 0-255 to 0-1.0.
Code:
https://imgur.com/a/9Ijom6U

Code:
import sys, os, re, traceback
from collections import namedtuple
from gimpfu import *
import gimpcolor
# personal use of font thanks to https://chequered.ink/font-license/

def plugin_ccbarlay(image, drawable):
    
    image = pdb.gimp_file_load("C:/Users/Games/Downloads/download (0).xcf", "/download (0).xcf")
    display = pdb.gimp_display_new(image)
    colr = 0.0
    colg = 0.75

    

    for items in range(1,25):
        xlength = items * 15
        pdb.gimp_context_set_foreground(gimpcolor.RGB(colr, colg, 0))
        location = r"C:/Users/Games/Downloads/download ({}).png".format(items)
        layer = pdb.gimp_file_load_layer(image, location)
        pdb.gimp_image_insert_layer(image, layer, None, -1)
        pdb.gimp_image_select_rectangle(image, 2, 220, 25, xlength, 20)
        pdb.gimp_drawable_edit_bucket_fill(layer, 0, 340, 10)
        layer = pdb.gimp_text_fontname (image, None, 315, 40, "0:00 UTC ^", 0, True, 30, PIXELS, "Withheld Data")
        pdb.gimp_image_merge_down(image, layer, 1)
        colr = colr + 0.03
        colg = colg - 0.03
        
register(
        "python-fu-ccbarlay",
        "This loads layers with a progress bar",
        "Very specific use case",
        "author: DK",
        "copyright: probably not",
        "date: 2024",
        "/Filters/ccbarlay",
        "",
        [
            

        ],
        [],
        plugin_ccbarlay)

main()

And saving the best for last, imo, what I call daylay.  This one has the full dark earth at midnight UTC.  It was a fun challenge figuring out how to cycle through the 'letters'.
Code:
https://imgur.com/a/XY5vjts

Code:
from gimpfu import *
import gimpcolor
# personal use of font called moon_phases.ttf thanks to Curtis Clark

def plugin_daylay(image, drawable):
    
    image = pdb.gimp_file_load("C:/Users/Games/Downloads/download (0).xcf", "/download (0).xcf")
    display = pdb.gimp_display_new(image)
    pdb.gimp_context_set_foreground(gimpcolor.RGB(58, 118, 222))
  
    # loads layers and prints moondings u - t skipping z and a, so that the m full moonding happens at 0:00 utc
    for items in range(1,25):
        location = r"C:/Users/Games/Downloads/download ({}).png".format(items)
        layer = pdb.gimp_file_load_layer(image, location)
        pdb.gimp_image_insert_layer(image, layer, None, -1)
        
        #these ifs use the proper fudge factor for the layer number conversion to ascii number
        if items <= 5:
            i = items + 116
            alpha = chr(i)
            layer = pdb.gimp_text_fontname (image, None, 365, 0, alpha, 0, True, 80, PIXELS, "day Phases")
            pdb.gimp_image_merge_down(image, layer, 1)
        
        if items >= 6:
            i = items + 92
            alpha = chr(i)
            layer = pdb.gimp_text_fontname (image, None, 365, 0, alpha, 0, True, 80, PIXELS, "day Phases")
            pdb.gimp_image_merge_down(image, layer, 1)
      

register(
        "python-fu-daylay",
        "This loads layers with a progress bar",
        "Very specific use case",
        "author: DK",
        "copyright: probably not",
        "date: 2024",
        "/Filters/daylay",
        "",
        [
            

        ],
        [],
        plugin_daylay)

main()
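
To sanity-check the letter cycling outside GIMP, the same arithmetic in plain Python prints the whole sequence (u through y, then b through t):
Code:
# same offsets as the plugin: items 1-5 map to 'u'-'y', items 6-24 map to 'b'-'t'
for items in range(1, 25):
    print(items, chr(items + 116) if items <= 5 else chr(items + 92))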

So while I had a lot of fun today, and know more about GIMP now, I think the biggest thing I realized is it looks like it's time to learn more about ImageMagick. Smiley
But I have the scripts if anyone has a cool font or color you think I might like.  

Latest runtime: 279.1 seconds, and I had to put back the plug-in I really wanted and restart GIMP, so that's around 15 seconds...Edit: spelling
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
I'm so excited, I figured out how to code a progress bar in GIMP!

I tied the length of the rectangle being selected and paint-bucketed to the layer number of the gif frame.

Keep this under your hat, I'm gonna cosplay as a graphic designer tomorrow and see what I can do with this.  My goal is to show when UTC 0:00 is relative to the progress bar.
Code:
https://imgur.com/a/ZC95ndv

Code:
import sys, os, re, traceback
from collections import namedtuple
from gimpfu import *

def plugin_textlay(image, drawable):
   
#load base layer with custom frametime
    image = pdb.gimp_file_load("C:/Users/Games/Downloads/download (0).xcf", "/download (0).xcf")
    display = pdb.gimp_display_new(image)

#load additional layers growing the status bar
    for items in range(1,25):
        xlength = items * 15
        location = r"C:/Users/Games/Downloads/download ({}).png".format(items)
        layer = pdb.gimp_file_load_layer(image, location)
        pdb.gimp_image_insert_layer(image, layer, None, -1)
        pdb.gimp_image_select_rectangle(image, 2, 220, 25, xlength, 30)
        pdb.gimp_drawable_edit_bucket_fill(layer, 0, 340, 10)
       
register(
        "python-fu-textlay",
        "This loads layers and bar",
        "Very specific use case",
        "author: DK",
        "copyright: probably not",
        "date: 2024",
        "/Filters/textlay",
        "",
        [
           

        ],
        [],
        plugin_textlay)

main()

I'm sure it's the same-but-different with how other code works, but I enjoy using Python, being able to define a variable, and then just plugging it in here and there.

Latest runtime: 270.9 seconds

legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
Wouldn't ImageMagick be easier to automate things than The GIMP?
Possibly.  For which step in particular?
I can't tell, I've never used The GIMP for automating anything, but I use ImageMagick a lot. I mentioned it because it may be worth looking into.

Wow, that seems like quite the tool!  I'm not sure I need all the power it provides, so I'm going to stick to GIMP for now.  Thank you though!
You're welcome Smiley Having more options is always good Smiley
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
I messed up again, but it was my fault.  Exploring imgur's API I got logged out, so when I was dragging the gif into imgur I kept getting an error.  I incorrectly attributed this error to imgur being too busy at the moment, which was a message I frequently received when trying to set up Postman, but it turns out I wasn't logged in to imgur.  After logging in, everything else went according to plan.  Runtime = 335.2 seconds.

Wouldn't ImageMagick be easier to automate things than The GIMP?

Wow, that seems like quite the tool!  I'm not sure I need all the power it provides, so I'm going to stick to GIMP for now.  Thank you though!

Changelog:
GIMP script now autoresizes the big gif into the imgtalk size gif, after exporting.  So I just have to hit Ctrl-Alt-S, click enter in VS, Ctrl-Alt-S again, click enter in VS and feed in the links
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Oh wow.  It worked!  Cool  I stopped midway to reread the script and make sure I knew where the first gif was going.  Didn't want a file to already be there.
Plus I always watch the gifs before posting to make sure everything is in order, but that is just a minute or so.   These are definitely rookie numbers: 369.8 seconds

I think with this code (trusting it, getting better at the keyboard shortcuts, and having the right folder open to drag images to the image hosting sites) we could get that down to under 60 seconds.  But I know we can automate more, mainly the downloading of the images and having the GIMP processing run unattended.  Heh, I should code a bot to post the nightly runtimes...actually, I won't spam the place up, but that might be good practice; first things first.

EDIT:
No hiccups 2 days in a row.  Grin  Runtime was 251.3 seconds.  Wasn't in any particular rush, enjoying all the pictures.   Smiley

Next steps:  learn to link gimp plug-ins using scripts running unattended, keep messing around with web scraping, streamline the current script, and I better start working on the script to make the monthly gif.  I should be able to modify the current gimp plugin that loads each day's 24 images, to be a loop inside of a loop.  We'll see. 
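
The loop inside a loop might look roughly like this inside the plugin, reusing the daily folder names like 2-15; an untested sketch that assumes image is already loaded like in the daily plugin:
Code:
from gimpfu import *

# hypothetical nested loop for a monthly gif: outer loop over each day's
# folder, inner loop over that day's 24 hourly frames
for day in range(1, 30):
    for items in range(1, 25):
        location = r"C:/Users/Games/Downloads/2-{}/download ({}).png".format(day, items)
        layer = pdb.gimp_file_load_layer(image, location)
        pdb.gimp_image_insert_layer(image, layer, None, -1)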
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Wouldn't ImageMagick be easier to automate things than The GIMP?

Possibly.  For which step in particular?  I do have prior experience with GIMP and am pleased with how my learning Python is tying in with it.  I'll look into it!

Okay.  I'm sure everyone sees the problem and y'all shouldn't have to wait.  We have two many changing variables.  Wink  Let's make the strftime work with random dates as well.

if statements, with random date:  26.7698 ms per link created,
strftime, with random date:          26.9713 ms per link created

Pretty close there, both have nice results within them, but probably too close to tell without more data.  What if we import the date for both?
lookup today ifs:   0.003934090 ms per link created
strftime lookup:    0.015847180 ms per link created

Seems like the ifs are conclusively faster, but today's date runs through fewer if statement checks, if I understand things, than a 'change of month or year' date would.  So I ran those as well; these times are in ms to make one link using the date provided.  Goal: format this properly once I figure out tables.  I tried.

all numbers are how long it took in milliseconds for each link to be created (ms/link)
lookup today ifs    set date ifs    if new month    if new year    if test rand    strftime lookup    strftime rand
0.00393409          0.00279952      0.00279952      0.002679110    26.7699         0.0158472          26.9713
0.00479044          0.00304908      0.00309240      0.00295039     29.5539         0.0154960          28.9656

But it seems, for now, the 'if' submission will lead to a higher score in time completion, while the 'strftime' will lead to a higher score in the line length competition.  Don't let perfect be the enemy of good.  Keep calm and make mistakes.
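
When I rerun these, the standard library's timeit module could handle the loop-and-average part by itself and keep the CSV writing out of the timed region.  A minimal sketch, assuming the two approaches are wrapped in functions named make_link_ifs and make_link_strftime (made-up names):
Code:
import timeit

# hypothetical wrappers around the two link builders being compared
reps = 10000
for name in ("make_link_ifs", "make_link_strftime"):
    total = timeit.timeit(f"{name}()", globals=globals(), number=reps)
    print(name, total / reps * 1000, "ms per link")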

Here's all the different code
Code:
>>>...>>>...>>>lookup today ifs
import datetime
from datetime import timedelta
from datetime import date
import time
import shutil
import os
from os import rename
import webbrowser
import subprocess
from tkinter import Tk
import csv
from faker import Faker

# start range
for i in range(0, 10000):
  
    # start runtimer
    tsubi = time.perf_counter()

    # set dates for link range and newf
    tod = date.today()
    tom = tod + timedelta(days = 1)

    # check dates and create link
    if tod.day < 9 and tod.month <= 9:
        # create link for days 1-8 and month 1-9
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-0{tom.day}T07%3A55%3A00"
            return link

    if tod.day < 9 and tod.month >= 10:
        # create link for days 1-8 and month 10,11,12
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tod.month}-0{tom.day}T07%3A55%3A00"
            return link

    if tod.day == 9 and tod.month <= 9:
        # create link for day 9 with tomorrow 10 and month 1-9
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-{tom.day}T07%3A55%3A00"
            return link

    if tod.day == 9 and tod.month >= 10:
        # create link for day 9 with tomorrow 10 and month 10,11,12
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tod.month}-{tom.day}T07%3A55%3A00"
            return link
        
    if tod.day > 9 and tod.month <= 9 and tod.month == tom.month:
        # create link for days 10+ and month 1-9
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-{tom.day}T07%3A55%3A00"
            return link

    if tod.day > 9 and tod.month >= 10 and tod.month == tom.month:
        # create link for days 10+ and month 10,11,12
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tod.month}-{tom.day}T07%3A55%3A00"
            return link

    if tod.month <= 8 and tod.month != tom.month:
        # create link for last day of months 1-8
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tom.month}-0{tom.day}T07%3A55%3A00"
            return link
        
    if tod.month == 9 and tod.month != tom.month:
        # create link for last day of month 9
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tom.month}-0{tom.day}T07%3A55%3A00"
            return link
        
    if (tod.month == 10 or tod.month == 11) and tod.month != tom.month:
        # create link for last day of month 10,11
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tom.month}-0{tom.day}T07%3A55%3A00"
            return link
        
    if tod.year != tom.year:
        # create link for last day of the year
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tom.year}-0{tom.month}-0{tom.day}T07%3A55%3A00"
            return link
        
    #runtime is calculated and printed
    tsubf = time.perf_counter()
  
    # calculate and save to csv file
    runtime = {tsubf-tsubi}    
    f = open('C:/PyProjects/tmp/if_test_nonew.csv', 'a', newline='')
    writer = csv.writer(f)
    writer.writerow(runtime)

>>>...>>>...>>>set date ifs
import datetime
from datetime import timedelta
from datetime import date
import time
import shutil
import os
from os import rename
import webbrowser
import subprocess
from tkinter import Tk
import csv
from faker import Faker

# start range
for i in range(0, 10000):
  
    # start runtimer
    tsubi = time.perf_counter()

    # set dates for link range and newf
    tod = datetime.datetime(2024, 1, 29)
    tom = tod + timedelta(days = 1)

    # check dates and create link
    if tod.day < 9 and tod.month <= 9:
        # create link for days 1-8 and month 1-9
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-0{tom.day}T07%3A55%3A00"
            return link

    if tod.day < 9 and tod.month >= 10:
        # create link for days 1-8 and month 10,11,12
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tod.month}-0{tom.day}T07%3A55%3A00"
            return link

    if tod.day == 9 and tod.month <= 9:
        # create link for day 9 with tomorrow 10 and month 1-9
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-{tom.day}T07%3A55%3A00"
            return link

    if tod.day == 9 and tod.month >= 10:
        # create link for day 9 with tomorrow 10 and month 10,11,12
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tod.month}-{tom.day}T07%3A55%3A00"
            return link
        
    if tod.day > 9 and tod.month <= 9 and tod.month == tom.month:
        # create link for days 10+ and month 1-9
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-{tom.day}T07%3A55%3A00"
            return link

    if tod.day > 9 and tod.month >= 10 and tod.month == tom.month:
        # create link for days 10+ and month 10,11,12
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tod.month}-{tom.day}T07%3A55%3A00"
            return link

    if tod.month <= 8 and tod.month != tom.month:
        # create link for last day of months 1-8
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tom.month}-0{tom.day}T07%3A55%3A00"
            return link
        
    if tod.month == 9 and tod.month != tom.month:
        # create link for last day of month 9
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tom.month}-0{tom.day}T07%3A55%3A00"
            return link
        
    if (tod.month == 10 or tod.month == 11) and tod.month != tom.month:
        # create link for last day of month 10,11
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tom.month}-0{tom.day}T07%3A55%3A00"
            return link
        
    if tod.year != tom.year:
        # create link for last day of the year
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tom.year}-0{tom.month}-0{tom.day}T07%3A55%3A00"
            return link
        
    #runtime is calculated and printed
    tsubf = time.perf_counter()
  
    # calculate and save to csv file
    runtime = {tsubf-tsubi}    
    f = open('C:/PyProjects/tmp/if_test_setdate.csv', 'a', newline='')
    writer = csv.writer(f)
    writer.writerow(runtime)

>>>...>>>...>>>if new month
import datetime
from datetime import timedelta
from datetime import date
import time
import shutil
import os
from os import rename
import webbrowser
import subprocess
from tkinter import Tk
import csv
from faker import Faker

# start range
for i in range(0, 10000):
  
    # start runtimer
    tsubi = time.perf_counter()

    # set dates for link range and newf
    tod = datetime.datetime(2024, 2, 29)
    tom = tod + timedelta(days = 1)

    # check dates and create link
    if tod.day < 9 and tod.month <= 9:
        # create link for days 1-8 and month 1-9
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-0{tom.day}T07%3A55%3A00"
            return link

    if tod.day < 9 and tod.month >= 10:
        # create link for days 1-8 and month 10,11,12
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tod.month}-0{tom.day}T07%3A55%3A00"
            return link

    if tod.day == 9 and tod.month <= 9:
        # create link for day 9 with tomorrow 10 and month 1-9
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-{tom.day}T07%3A55%3A00"
            return link

    if tod.day == 9 and tod.month >= 10:
        # create link for day 9 with tomorrow 10 and month 10,11,12
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tod.month}-{tom.day}T07%3A55%3A00"
            return link
        
    if tod.day > 9 and tod.month <= 9 and tod.month == tom.month:
        # create link for days 10+ and month 1-9
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-{tom.day}T07%3A55%3A00"
            return link

    if tod.day > 9 and tod.month >= 10 and tod.month == tom.month:
        # create link for days 10+ and month 10,11,12
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tod.month}-{tom.day}T07%3A55%3A00"
            return link

    if tod.month <= 8 and tod.month != tom.month:
        # create link for last day of months 1-8
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tom.month}-0{tom.day}T07%3A55%3A00"
            return link
        
    if tod.month == 9 and tod.month != tom.month:
        # create link for last day of month 9
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tom.month}-0{tom.day}T07%3A55%3A00"
            return link
        
    if (tod.month == 10 or tod.month == 11) and tod.month != tom.month:
        # create link for last day of month 10,11
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tom.month}-0{tom.day}T07%3A55%3A00"
            return link
        
    if tod.year != tom.year:
        # create link for last day of the year
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tom.year}-0{tom.month}-0{tom.day}T07%3A55%3A00"
            return link
        
    #runtime is calculated and printed
    tsubf = time.perf_counter()
  
    # calculate and save to csv file
    runtime = {tsubf-tsubi}    
    f = open('C:/PyProjects/tmp/if_test_newmonth.csv', 'a', newline='')
    writer = csv.writer(f)
    writer.writerow(runtime)

>>>...>>>...>>>if new year
import datetime
from datetime import timedelta
from datetime import date
import time
import shutil
import os
from os import rename
import webbrowser
import subprocess
from tkinter import Tk
import csv
from faker import Faker

# start range
for i in range(0, 10000):
  
    # start runtimer
    tsubi = time.perf_counter()

    # set dates for link range and newf
    tod = datetime.datetime(2001, 12, 31)
    tom = tod + timedelta(days = 1)

    # check dates and create link
    if tod.day < 9 and tod.month <= 9:
        # create link for days 1-8 and month 1-9
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-0{tom.day}T07%3A55%3A00"
            return link

    if tod.day < 9 and tod.month >= 10:
        # create link for days 1-8 and month 10,11,12
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tod.month}-0{tom.day}T07%3A55%3A00"
            return link

    if tod.day == 9 and tod.month <= 9:
        # create link for day 9 with tomorrow 10 and month 1-9
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-{tom.day}T07%3A55%3A00"
            return link

    if tod.day == 9 and tod.month >= 10:
        # create link for day 9 with tomorrow 10 and month 10,11,12
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tod.month}-{tom.day}T07%3A55%3A00"
            return link
        
    if tod.day > 9 and tod.month <= 9 and tod.month == tom.month:
        # create link for days 10+ and month 1-9
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-{tom.day}T07%3A55%3A00"
            return link

    if tod.day > 9 and tod.month >= 10 and tod.month == tom.month:
        # create link for days 10+ and month 10,11,12
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tod.month}-{tom.day}T07%3A55%3A00"
            return link

    if tod.month <= 8 and tod.month != tom.month:
        # create link for last day of months 1-8
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tom.month}-0{tom.day}T07%3A55%3A00"
            return link
        
    if tod.month == 9 and tod.month != tom.month:
        # create link for last day of month 9
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tom.month}-0{tom.day}T07%3A55%3A00"
            return link
        
    if (tod.month == 10 or tod.month == 11) and tod.month != tom.month:
        # create link for last day of month 10,11
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tom.month}-0{tom.day}T07%3A55%3A00"
            return link
        
    if tod.year != tom.year:
        # create link for last day of the year
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tom.year}-0{tom.month}-0{tom.day}T07%3A55%3A00"
            return link
        
    #runtime is calculated and printed
    tsubf = time.perf_counter()
  
    # calculate and save to csv file
    runtime = {tsubf-tsubi}    
    f = open('C:/PyProjects/tmp/if_test_newyear.csv', 'a', newline='')
    writer = csv.writer(f)
    writer.writerow(runtime)


>>>...>>>...>>>if test rand
import datetime
from datetime import timedelta
from datetime import date
import time
import shutil
import os
from os import rename
import webbrowser
import subprocess
from tkinter import Tk
import csv
from faker import Faker

# start range
for i in range(0, 10000):

    # start runtimer
    tsubi = time.perf_counter()

    # make fake datetime
    fake = Faker()
    tod = fake.future_datetime('+50y', None)
    tom = tod + timedelta(days = 1)

    # check dates and create link
    if tod.day < 9 and tod.month <= 9:
        # create link for days 1-8 and month 1-9
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-0{tom.day}T07%3A55%3A00"
            return link

    if tod.day < 9 and tod.month >= 10:
        # create link for days 1-8 and month 10,11,12
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tod.month}-0{tom.day}T07%3A55%3A00"
            return link

    if tod.day == 9 and tod.month <= 9:
        # create link for day 9 with tomorrow 10 and month 1-9
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-{tom.day}T07%3A55%3A00"
            return link

    if tod.day == 9 and tod.month >= 10:
        # create link for day 9 with tomorrow 10 and month 10,11,12
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tod.month}-{tom.day}T07%3A55%3A00"
            return link
        
    if tod.day > 9 and tod.month <= 9 and tod.month == tom.month:
        # create link for days 10+ and month 1-9
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-{tom.day}T07%3A55%3A00"
            return link

    if tod.day > 9 and tod.month >= 10 and tod.month == tom.month:
        # create link for days 10+ and month 10,11,12
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tod.month}-{tom.day}T07%3A55%3A00"
            return link

    if tod.month <= 8 and tod.month != tom.month:
        # create link for last day of months 1-8
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tom.month}-0{tom.day}T07%3A55%3A00"
            return link
        
    if tod.month == 9 and tod.month != tom.month:
        # create link for last day of month 9
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tom.month}-0{tom.day}T07%3A55%3A00"
            return link
        
    if (tod.month == 10 or tod.month == 11) and tod.month != tom.month:
        # create link for last day of month 10,11
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tom.month}-0{tom.day}T07%3A55%3A00"
            return link
        
    if tod.year != tom.year:
        # create link for last day of the year
        def CBuddy():
            link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tom.year}-0{tom.month}-0{tom.day}T07%3A55%3A00"
            return link
        
    #runtime is calculated and printed
    tsubf = time.perf_counter()
  
    # calculate and save to csv file
    runtime = {tsubf-tsubi}    
    f = open('C:/PyProjects/tmp/if_test_random.csv', 'a', newline='')
    writer = csv.writer(f)
    writer.writerow(runtime)

>>>...>>>...>>>strftime lookup
import datetime
from datetime import timedelta
from datetime import date
import time
import shutil
import os
from os import rename
import webbrowser
import subprocess
from tkinter import Tk
import csv
from faker import Faker

# start range
for i in range(0, 10000):

    # start runtimer
    tsubi = time.perf_counter()

    #set dates for links and new folder
    today = date.today()
    tomorrow = today + timedelta(1)

    # open links to download and upload images if only I could get out of the way
    url1 = f"https://jastic.space/search?after_date={today.strftime("%Y")}-{today.strftime("%m")}-{today.strftime("%d")}T07%3A55%3A00&author=ChartBuddy&before_date={tomorrow.strftime("%Y")}-{tomorrow.strftime("%m")}-{tomorrow.strftime("%d")}T07%3A55%3A00"

    tsubf = time.perf_counter()
  
    # calculate and save to csv file
    runtime = {tsubf-tsubi}    
    f = open('C:/PyProjects/tmp/strftime_test.csv', 'a', newline='')
    writer = csv.writer(f)
    writer.writerow(runtime)

>>>...>>>...>>>strftime rand
import datetime
from datetime import timedelta
from datetime import date
import time
import shutil
import os
from os import rename
import webbrowser
import subprocess
from tkinter import Tk
import csv
from faker import Faker

# start range
for i in range(0, 10000):

    # start runtimer
    tsubi = time.perf_counter()

    # make fake datetime
    fake = Faker()
    today = fake.future_datetime('+50y', None)
    tomorrow = today + timedelta(days = 1)

    # open links to download and upload images if only I could get out of the way
    url1 = f"https://jastic.space/search?after_date={today.strftime("%Y")}-{today.strftime("%m")}-{today.strftime("%d")}T07%3A55%3A00&author=ChartBuddy&before_date={tomorrow.strftime("%Y")}-{tomorrow.strftime("%m")}-{tomorrow.strftime("%d")}T07%3A55%3A00"

    tsubf = time.perf_counter()
  
    # calculate and save to csv file
    runtime = {tsubf-tsubi}    
    f = open('C:/PyProjects/tmp/strftime_withR_test.csv', 'a', newline='')
    writer = csv.writer(f)
    writer.writerow(runtime)

Edit: fixed the incorrect savefile in 'set date ifs', responded, and added another run of data; I must have messed something up with the first run to get identical results?  The second run is still pretty close, though.  I put everything into one code box.  I would like to check the code, rerun the tests, and try tables again.

legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
Wouldn't ImageMagick be easier to automate things than The GIMP?
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Last night was another crash and burn, but this one was again my fault.  What happened was that I first started learning with Anaconda and Jupyter, and when I started trying Visual Studio I couldn't import 'requests' like I could in Jupyter.  So after more research, I decided to start over with PowerShell 7, Visual Studio, and Python (not from the Microsoft store).  So I created new paths, but then forgot to update the code before runtime.  See, not my fault.  Smiley  All right.  Now we're having fun.  No more untested code, and more testing!

Storylog: 
1.Download No more link making using if statements, for now, more on that later.  I figured out how to use 'strftime' to format the days and months as always two digits, in a lot fewer lines.

2.Import 3.Export New path names.  Goal:  Start whole setup over after mastering virtual environments

5.Archive Daily runtimes are now being calculated using perf_counter and added to a csv file for later performance review.

Here is my production code for today:
Code:
from datetime import timedelta, date
import time
import shutil
import os
from os import rename
import webbrowser
import subprocess
from tkinter import Tk
import csv

# start runtimer
tsubi = time.perf_counter()

#set dates for links and new folder
today = date.today()
tomorrow = today + timedelta(1)

# open links to download and upload images if only I could get out of the way
url1 = f"https://jastic.space/search?after_date={today.strftime("%Y")}-{today.strftime("%m")}-{today.strftime("%d")}T07%3A55%3A00&author=ChartBuddy&before_date={tomorrow.strftime("%Y")}-{tomorrow.strftime("%m")}-{tomorrow.strftime("%d")}T07%3A55%3A00"
url2 = "https://mgur.com/upload"
url3 = "https://www.alkimg.com/"
webbrowser.open(url1)
webbrowser.open(url2)
webbrowser.open(url3)

# name newfolder with date
directory = f"{today.month}-{today.day}"
parent_dir = "C:/Users/Games/Downloads"
newf = os.path.join(parent_dir, directory)
os.mkdir(newf)
print("Directory '%s' created" %directory)

# learn to scrape 24 images 1 second at a time,
# else manually download each file

# automatically open gimp, then filter to load all images
subprocess.Popen([r'C:/Program Files/GIMP 2/bin/gimp-2.10.exe'])

# export big gif press the any key then enter
print("Is big gif exported?")
input()
print("Movin' on...")

# big gif is moved
src = "C:/PyProjects/tmp/"
dest = "C:/PyProjects/GMIP/2024/2-2024/"
shutil.move("C:/PyProjects/tmp/gif.gif", dest)
rename ("C:/PyProjects/GMIP/2024/2-2024/gif.gif", f"C:/PyProjects/GMIP/2024/2-2024/b{today.month}-{today.day}.gif")

# scale image and export little gif
print("Is little gif exported?")
input()
print("Movin' on...")

# little gif is moved
src = "C:/PyProjects/tmp/"
dest = "C:/PyProjects/GMIP/2024/2-2024/"
shutil.move("C:/PyProjects/tmp/gif.gif", dest)
rename ("C:/PyProjects/GMIP/2024/2-2024/gif.gif", f"C:/PyProjects/GMIP/2024/2-2024/{today.month}-{today.day}.gif")

# ID files
src = "C:/Users/Games/Downloads"
dest = "C:/Users/Games/Downloads/{}".format(directory)
files = os.listdir(src)
os.chdir(src)

# i have a dummy file present so new downloads look like download(*).png
# only move numbered png files
for file in files:
    if os.path.isfile(file):
        if file.endswith(").png"):
            shutil.move(file, dest) 

# upload to two sites, gather links to input into console
ibg = input("imgur big gif link here")
imgtalk = input("imgtalk little gif link here")

# add post to clipboard for btctalk
r = Tk()
r.withdraw()
r.clipboard_clear()
r.clipboard_append(f"ChartBuddy's 24 hour Wall Observation recap\n[url={ibg}].{imgtalk}.[/url]\nAll Credit to [url=https://bitcointalk.org/index.php?topic=178336.msg10084622#msg10084622]ChartBuddy[/url]")
r.update()

#this holds the post on the clipboard until posted
print("All done?")
input()

#runtime is calculated
tsubf = time.perf_counter()
runtime = {tsubf - tsubi}

# save to csv file
f = open('C:/PyProjects/runtimes.csv', 'a', newline='')
writer = csv.writer(f)
writer.writerow(runtime)

So what was I talking about earlier?  How using strftime takes a lot fewer lines than stringing the if statements together.  But after figuring out how to do runtimes, I figured we could do some science!

So I made 10000 links using the following methods.  I figured out, finally, how to feed in a fake date and run it through, using mostly the following
Code:
# start range
for i in range(0, 10000):

    # start runtimer
    tsubi = time.perf_counter()

    # make fake datetime
    fake = Faker()
    tod = fake.future_datetime('+50y', None)
    tom = tod + timedelta(days = 1)
I'd love to share the rest if anyone wants them. So now I could get a sampling of dates that are going through various levels of if statements.  So here are the results:

if statements, with random date:  26.7698 ms per link created
strftime, with imported date: 0.0158472 ms per link created

Wow, total destruction for the if method.  Or is it?  We might need more data.

Hold on I'm going to go run this code, and then I'll post the runtime. 

Sweet, the runtime was: crash and burn.  Okay, okay, but for real this time it was all my fault.  See, when you try to test your code you need dummy files to be moved around, and if those dummy files don't get deleted before the next run, then python says, "Hey dummy, since your dummy files are there it would be quite rude of me to just copy over them, so I'm going to set this bird down as gently as possible."  It would have worked; I know this because I did delete the dummy files, changed the web addresses to avoid reloading those, re-ran the script with the already downloaded files, and away we went.  Let's see what that unofficial runtime was; well, I also forgot to click the last enter to stop the timer, so it came out to 600.84 s.  Seems to be working though.   Sweet.  Till next time, same python time, same python channel.
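
A cheap guard against the dummy-file collision would be checking the destination before moving, since shutil.move raises when the file already exists there.  A minimal sketch (hypothetical helper):
Code:
import os
import shutil

def safe_move(src_path, dest_dir):
    # refuse to clobber: skip the move if the destination file already exists
    target = os.path.join(dest_dir, os.path.basename(src_path))
    if os.path.exists(target):
        print("skipping, already there:", target)
        return
    shutil.move(src_path, dest_dir)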

sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Brilliant crash and burn on the foolproof forever solved link creation procedure!
And the timer didn't work either.  The pizza was really tasty though. 

I blame stepping away from the project and being unclear on remembering the problem: namely, not needing a single-digit day, but needing to deal with a single-digit return from datetime.  Which is why I thought the one test I did run in the morning was a success.

Here's the locked-in, guaranteed-for-sure code.  I'll leave it to the reader to spot the changes.  I did clean up a bit after realizing it will never be a new month when the day is below 10. Smiley

Relevant codefix below
Code:
# set dates for link range and newf
tod = datetime.date.today()
tom = tod + timedelta(days = 1)

# check dates and create link
if tod.day < 9 and tod.month <= 9:
    # create link for days 1-8 and month 1-9 formatted like this days 01-08 and month 01-09
    def CBuddy():
        link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-0{tom.day}T07%3A55%3A00"
        return link

if tod.day < 9 and tod.month >= 10:
    # create link for days 1-8 and month 10,11,12
    def CBuddy():
        link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tod.month}-0{tom.day}T07%3A55%3A00"
        return link

if tod.day == 9 and tod.month <= 9:
    # create link for day 9 with tomorrow 10 and month 1-9
    def CBuddy():
        link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-{tom.day}T07%3A55%3A00"
        return link

if tod.day == 9 and tod.month >= 10:
    # create link for day 9 with tomorrow 10 and month 10,11,12
    def CBuddy():
        link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tod.month}-{tom.day}T07%3A55%3A00"
        return link
   
if tod.day > 9 and tod.month <= 9 and tod.month == tom.month:
    # create link for days 10+ and month 1-9
    def CBuddy():
        link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-{tom.day}T07%3A55%3A00"
        return link

if tod.day > 9 and tod.month >= 10 and tod.month == tom.month:
    # create link for days 10+ and month 10,11,12
    def CBuddy():
        link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tod.month}-{tom.day}T07%3A55%3A00"
        return link

if tod.month <= 8 and tod.month != tom.month:
    # create link for last day of months 1-8
    def CBuddy():
        link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tom.month}-0{tom.day}T07%3A55%3A00"
        return link
     
if tod.month == 9 and tod.month != tom.month:
    # create link for last day of month 9
    def CBuddy():
        link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tom.month}-0{tom.day}T07%3A55%3A00"
        return link
     
if (tod.month == 10 or tod.month == 11) and tod.month != tom.month:
    # create link for last day of month 10,11
    def CBuddy():
        link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tom.month}-0{tom.day}T07%3A55%3A00"
        return link
     
if tod.year != tom.year:
    # create link for last day of the year
    def CBuddy():
        link = f"https://jastic.space/search?after_date={tod.year}-{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tom.year}-0{tom.month}-0{tom.day}T07%3A55%3A00"
        return link

# test on nonexistent site
url1 = CBuddy()

webbrowser.open(url1)

Great progress OP, I am personally following your coding progress and I am very happy that you are trying to inspire others and meanwhile logging it for yourself on this platform.
One day, you will get back to this post and wonder how great you could have made this same application and spending way less time but it's all great. I am glad I could help you a bit in it.
It already looks much cleaner and better than the original.

I have a few questions about your post, but thanks!  I also wonder how long it will take to know enough to code it from memory, or are you talking about improving the runtime efficiency?  I'm really excited to reach that stage.  It's like in the game https://store.steampowered.com/app/375820/Human_Resource_Machine/ where you get one score for limiting the number of different commands you use, and another score for using fewer lines of code to complete the task.
copper member
Activity: 1498
Merit: 1619
Bitcoin Bottom was at $15.4k
Great progress OP, I am personally following your coding progress and I am very happy that you are trying to inspire others and meanwhile logging it for yourself on this platform.
One day, you will get back to this post and wonder how great you could have made this same application and spending way less time but it's all great. I am glad I could help you a bit in it.
It already looks much cleaner and better than the original.
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Progress update.  Things are really getting streamlined now, and I also had to do some fixing of my link building process.  

Storylog:
1.Download This one was some fun.  Attempting to account for sometimes needing a single-digit day and sometimes a double-digit day to build that day's ninjastic space link seemed simple enough.  Just check if it's a single or double-digit day, and then build the right link.  But I remembered what happened last night: because the link contains a range of consecutive days, sometimes the same link needs both single and double digits.  Well, that only happens on the 9th day of the month, so that shouldn't be too tough.  But then I was puzzled about the last day of the month, and how I would need to know what that date was to make another exception, and then I remembered this video from Tom Scott, The Problem with Time & Timezones - Computerphile, and thought maybe if-elses weren't the brightest way to solve this problem.   Grin

I figured out how to deal with the changing months by adding a second 'and' to check whether today's month is the same as tomorrow's month, and then copying the needed code for when tomorrow is a different month.  I stuck with it, and hopefully I'm paying myself by the line, because I cluttered my code with 8 ifs, 11 ands, 1 or, and 1 else.  I think I'm good with link making until we switch calendars again, if only because the link isn't affected by time zones, leap days, or daylight savings, as far as I can tell.

I'd better go triple-check after a claim like that... I think I've found at least 5 things that would have gone wrong since typing it.  It worked the one time I tested it earlier for today's date.  Let's find out!  It's the first part of my ever-bloating code, if you want to find the 6th thing.   Wink

I'm thinking a more efficient way might be learning how to use arrays, and functions that work on them.
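For future reference, here's a sketch of where that might end up (untested against the site, so treat it as a guess): Python f-strings can zero-pad with a format spec like :02d, which would collapse the whole chain of ifs into one function.
Code:
import datetime
from datetime import timedelta

tod = datetime.date.today()
tom = tod + timedelta(days=1)

# :02d pads single digits with a leading zero (3 -> 03, 12 -> 12),
# so one f-string should survive day, month, and year rollovers
def CBuddy():
    return (f"https://ninjastic.space/search?after_date="
            f"{tod.year}-{tod.month:02d}-{tod.day:02d}T07%3A55%3A00"
            f"&author=ChartBuddy&before_date="
            f"{tom.year}-{tom.month:02d}-{tom.day:02d}T07%3A55%3A00")

print(CBuddy())
If that holds up, the forest of ifs goes away entirely.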

2.Import  Changed the script so the first file loaded is an .xcf file (GIMP's file extension for a work in progress) rather than a .png, so the first image in the gif retains its frame time, which is longer than the rest.  Created keyboard shortcuts for the plug-ins used to import and export files.  Thinking about: as the layers (hours) are loaded, placing some sort of timekeeping mark on each frame, relative to midnight UTC; maybe appearing or disappearing dots.

4.Post  Changed script to include line breaks for automatic post formatting.

Code:
import datetime
from datetime import timedelta
import time
import shutil
import os
from os import rename
import webbrowser
import subprocess
from tkinter import Tk

tsubi = time.perf_counter()

# set dates for link range and newf
tod = datetime.date.today()
tom = tod + timedelta(days = 1)

# check dates and create link; elif keeps later branches from clobbering the first match
if tod.day < 9 and tod.month <= 9 and tod.month == tom.month:
    # create link for days 1-8 and month 1-9
    def CBuddy():
        link = f"https://ninjastic.space/search?after_date={tod.year}-0{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-0{tom.day}T07%3A55%3A00"
        return link

elif tod.day < 9 and tod.month >= 10 and tod.month == tom.month:
    # create link for days 1-8 and month 10,11,12
    def CBuddy():
        link = f"https://ninjastic.space/search?after_date={tod.year}-{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tod.month}-0{tom.day}T07%3A55%3A00"
        return link

elif tod.day == 9 and tod.month <= 9 and tod.month == tom.month:
    # create link for day 9 with tomorrow 10 and months 1-9
    def CBuddy():
        link = f"https://ninjastic.space/search?after_date={tod.year}-0{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-{tom.day}T07%3A55%3A00"
        return link

elif tod.day == 9 and tod.month >= 10 and tod.month == tom.month:
    # create link for day 9 with tomorrow 10 and months 10,11,12
    def CBuddy():
        link = f"https://ninjastic.space/search?after_date={tod.year}-{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tod.month}-{tom.day}T07%3A55%3A00"
        return link

elif tod.month <= 8 and tod.month != tom.month:
    # create link for last day of months 1-8
    def CBuddy():
        link = f"https://ninjastic.space/search?after_date={tod.year}-0{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tom.month}-0{tom.day}T07%3A55%3A00"
        return link
    
elif tod.month == 9 and tod.month != tom.month:
    # create link for last day of month 9
    def CBuddy():
        link = f"https://ninjastic.space/search?after_date={tod.year}-0{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tom.month}-0{tom.day}T07%3A55%3A00"
        return link
    
elif (tod.month == 10 or tod.month == 11) and tod.month != tom.month:
    # create link for last day of month 10,11
    def CBuddy():
        link = f"https://ninjastic.space/search?after_date={tod.year}-{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tom.month}-0{tom.day}T07%3A55%3A00"
        return link
    
elif tod.year != tom.year:
    # create link for last day of the year
    def CBuddy():
        link = f"https://ninjastic.space/search?after_date={tod.year}-{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tom.year}-0{tom.month}-0{tom.day}T07%3A55%3A00"
        return link

else:
    #create double digit links
    def CBuddy():
        link = f"https://ninjastic.space/search?after_date={tod.year}-{tod.month}-{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-{tod.month}-{tom.day}T07%3A55%3A00"
        return link

# open links to download and upload images if only I could get out of the way
url1 = CBuddy()
url2 = "https://imgur.com/upload"
url3 = "https://www.talkimg.com/"
webbrowser.open(url1)
webbrowser.open(url2)
webbrowser.open(url3)

# name newfolder with date
directory = f"{tod.month}-{tod.day}"
parent_dir = "C:/Users/Games/Downloads"
newf = os.path.join(parent_dir, directory)
os.mkdir(newf)
print("Directory '%s' created" %directory)

# learn to scrape 24 images 1 second at a time,
# else manually download each file

# automatically open gimp, then filter to load all images
subprocess.Popen([r'C:/Program Files/GIMP 2/bin/gimp-2.10.exe'])

# export big gif press the any key then enter
print("Is big gif exported?")
input()
print("Movin' on...")

# move big gif
src = "C:/Users/Games/tmp/"
dest = "C:/Users/Games/Desktop/GMIP/2024/2-2024/"
shutil.move("C:/Users/Games/tmp/gif.gif", dest)
rename ("C:/Users/Games/Desktop/GMIP/2024/2-2024/gif.gif", f"C:/Users/Games/Desktop/GMIP/2024/2-2024/b{tod.month}-{tod.day}.gif")

# scale image and export little gif
print("Is little gif exported?")
input()
print("Movin' on...")

# move little gif
src = "C:/Users/Games/tmp/"
dest = "C:/Users/Games/Desktop/GMIP/2024/2-2024/"
shutil.move("C:/Users/Games/tmp/gif.gif", dest)
rename ("C:/Users/Games/Desktop/GMIP/2024/2-2024/gif.gif", f"C:/Users/Games/Desktop/GMIP/2024/2-2024/{tod.month}-{tod.day}.gif")

# ID files
src = "C:/Users/Games/Downloads"
dest = "C:/Users/Games/Downloads/{}".format(directory)
files = os.listdir(src)
os.chdir(src)

# i have a dummy file present so new downloads look like download(*).png
# only move numbered png files
for file in files:
    if os.path.isfile(file):
        if file.endswith(").png"):
            shutil.move(file, dest)  

print("Files moved.")

# upload to two sites, gather links to input into console
ibg = input("imgur big gif link here")
imgtalk = input("imgtalk little gif link here")

# add post to clipboard for btctalk
r = Tk()
r.withdraw()
r.clipboard_clear()
r.clipboard_append(f"ChartBuddy's 24 hour Wall Observation recap\n[url={ibg}].{imgtalk}.[/url]\nAll Credit to [url=https://bitcointalk.org/index.php?topic=178336.msg10084622#msg10084622]ChartBuddy[/url]")
r.update()

#this holds the post on the clipboard until posted
print("All done?")
input()

#runtime is calculated and printed
tsubf = time.perf_counter()
print(f"This time it took {tsubf - tsubi:0.4f} seconds")

#save this for later to append a file
runtime = tsubf - tsubi

Next steps:

1.Download: Learn to automate downloading certain images from bct
2.Import: Run GIMP and scripts unattended
3.Export: Learn about imgur's API
4.Post: Learn to auto post on bct
5.Archive: combine a months worth of images

I should put in a timer that keeps track of how long each recap takes.  Let's do that five minutes before posting this; nothing like last-minute, untested coding to make things work.  Smiley
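And for the 'append a file' comment at the bottom of the code, something like this sketch might do it (runtimes.txt is just a name I made up, nothing exists yet):
Code:
import datetime
import time

tsubi = time.perf_counter()
# ... the recap steps would run here ...
runtime = time.perf_counter() - tsubi

# append one line per run: date,seconds
with open("runtimes.txt", "a") as f:
    f.write(f"{datetime.date.today()},{runtime:0.4f}\n")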

Pizza time!

Edit: fixed GMIP typo, grammar
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Hi there,

~snip~

I hope this will help you.

Wow.  Thank you for those kind words and advice.  Your code looks a lot cleaner for sure.  Smiley
I like the idea of setting variables and paths at the top.  I'm looking forward to knowing how to write my own code, instead of just being able to read it well enough to fix the bugs that keep popping up whenever I try to integrate another web-search snippet.  Maybe I'll be able to by the time GIMP 3.0 is out and I have to rework those functions.
The answer is probably to write another function, but if I use the date in more than one place, would the example you provided work?  I should probably try it myself and see.
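My guess, untested, is that it would work if the dates are set once at the top and handed to the function as parameters, so every caller shares the same values:
Code:
import datetime
from datetime import timedelta

def create_link(today, tomorrow):
    # the dates arrive as arguments instead of being computed inside
    return (f"https://ninjastic.space/search?after_date="
            f"{today.year}-0{today.month}-0{today.day}T07%3A55%3A00"
            f"&author=ChartBuddy&before_date="
            f"{today.year}-0{today.month}-0{tomorrow.day}T07%3A55%3A00")

tod = datetime.date.today()
tom = tod + timedelta(days=1)
link = create_link(tod, tom)
directory = f"{tod.month}-{tod.day}"  # the same tod feeds the folder name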

Oh, and although I said I didn't straight copy-paste anything, I did download a blank gimp-fu plugin template, as seen above, because I didn't get all the blurbs and such; but now I understand a bit more about what those do, and I have retyped that plugin with my own blurbs.

Changelog:
3.Export  I mentioned the improvement in exporting, naming, and storing the gifs last post.  Now GIMP opens without a click.  Goal: after opening, perform the first filter automatically, though I should probably spend my effort on learning to make it run unattended.

4.Post  Now, instead of digging for the text of the post in drafts, I save the text of the post and place it on the clipboard, only a ctrl-v away until the script ends, which is the reason for the final input.  The script also now opens both image hosting sites.  I just figured out how to have 2 popups ask for the gif urls and place them into the text of the post automatically before it goes on the clipboard.  Took me a minute to discover that a right click in the VS terminal is a paste operation.  Goal: have it pull the links straight from the clipboard.

I think this is what you were talking about, I'mThour?  Where instead of typing webbrowser.open three times, I should create a function and feed it the 3 urls?  Well, I'm just so excited it all worked according to plan.  Thanks again!
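If so, it might look something like this sketch:
Code:
import webbrowser

def open_urls(urls):
    # one loop instead of three copy-pasted webbrowser.open calls
    for url in urls:
        webbrowser.open(url)

# the day's CBuddy() link from the script below would join this list too
open_urls(["https://imgur.com/upload", "https://www.talkimg.com/"])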

Code:
import datetime
from datetime import timedelta
import shutil
import os
from os import rename
import webbrowser
import subprocess
from tkinter import Tk

# set dates for link range and newf
tod = datetime.date.today()
tom = tod + timedelta(days = 1)

# create link
def CBuddy():
    link = f"https://ninjastic.space/search?after_date={tod.year}-0{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-0{tom.day}T07%3A55%3A00"
    return link
print(CBuddy())

# open links to download and upload images if only I could get out of the way
url1 = CBuddy()
url2 = "https://imgur.com/upload"
url3 = "https://www.talkimg.com/"
webbrowser.open(url1)
webbrowser.open(url2)
webbrowser.open(url3)

# name newfolder with date
directory = f"{tod.month}-{tod.day}"
parent_dir = "C:/Users/Games/Downloads"
newf = os.path.join(parent_dir, directory)
os.mkdir(newf)
print("Directory '%s' created" %directory)

# learn to scrape 24 images 1 second at a time,
# else manually download each file

# automatically open gimp, then filter to load all images
subprocess.Popen([r'C:/Program Files/GIMP 2/bin/gimp-2.10.exe'])

# export big gif press the any key then enter
print("Is big gif exported?")
input()
print("Movin' on...")

# move big gif
src = "C:/Users/Games/tmp/"
dest = "C:/Users/Games/Desktop/GMIP/2024/2-2024/"
shutil.move("C:/Users/Games/tmp/gif.gif", dest)
rename ("C:/Users/Games/Desktop/GMIP/2024/2-2024/gif.gif", f"C:/Users/Games/Desktop/GMIP/2024/2-2024/b{tod.month}-{tod.day}.gif")

# scale image and export little gif
print("Is little gif exported?")
input()
print("Movin' on...")

# move little gif
src = "C:/Users/Games/tmp/"
dest = "C:/Users/Games/Desktop/GMIP/2024/2-2024/"
shutil.move("C:/Users/Games/tmp/gif.gif", dest)
rename ("C:/Users/Games/Desktop/GMIP/2024/2-2024/gif.gif", f"C:/Users/Games/Desktop/GMIP/2024/2-2024/{tod.month}-{tod.day}.gif")

# ID files
src = "C:/Users/Games/Downloads"
dest = "C:/Users/Games/Downloads/{}".format(directory)
files = os.listdir(src)
os.chdir(src)

# i have a dummy file present so new downloads look like download(*).png
# only move numbered png files
for file in files:
    if os.path.isfile(file):
        if file.endswith(").png"):
            shutil.move(file, dest)  

print("Files moved.")

# upload to two sites, gather links to input into console
ibg = input("imgur big gif link here")
imgtalk = input("imgtalk little gif link here")

# add post to clipboard for btctalk
r = Tk()
r.withdraw()
r.clipboard_clear()
r.clipboard_append(f"ChartBuddy's 24 hour Wall Observation recap [url ={ibg}].{imgtalk}.[/url] All Credit to [url=https://bitcointalk.org/index.php?topic=178336.msg10084622#msg10084622]ChartBuddy[/url]")
r.update()

#this holds the post on the clipboard
print("All done?")
input()

Edit: spelling, phrasing
copper member
Activity: 1498
Merit: 1619
Bitcoin Bottom was at $15.4k
Hi there,

I can see you are really dedicated towards your code and as a Programmer, I would like to contribute something to your code.

1. Start using functions so that you can call it without re-writing the same code again. For example, I made a create_link function.

Code:
def create_link():
    today = datetime.date.today()
    tomorrow = today + timedelta(days=1)
    link = f"https://jastic.space/search?after_date={today.year}-0{today.month}-0{today.day}T07%3A55%3A00&author=ChartBuddy&before_date={today.year}-0{today.month}-0{tomorrow.day}T07%3A55%3A00"
    return link

2. Try to declare constants for PATHs in the start of the code after imports.

Code:
DOWNLOADS_DIR = "C:/Users/Games/Downloads"
TMP_DIR = "C:/Users/Games/tmp"
DESKTOP_DIR = "C:/Users/Games/Desktop/GMIP/2024/2-2024"
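Then the rest of the script can build every path from those constants instead of repeating the strings, for example (a sketch; the date string is a placeholder):
Code:
import os

DOWNLOADS_DIR = "C:/Users/Games/Downloads"

directory = "2-15"  # placeholder for the dated folder name
newf = os.path.join(DOWNLOADS_DIR, directory)
os.makedirs(newf, exist_ok=True)  # like os.mkdir, but won't crash on a rerun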

I hope this will help you.
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Big progress indeed,  Huh  Things broke yesterday with the GIMP script.  Can you see why?  Here is the current import script:
Code:
#!/usr/bin/python

import sys, os, re, traceback
from collections import namedtuple
from gimpfu import *
from pdb import *

def plugin_loadlay(image, drawable):
    
    image = pdb.gimp_file_load("C:/Users/Games/download (0).png", "/download (0).png")
    display = pdb.gimp_display_new(image)

    for items in range(1,25):
        location = r"C:/Users/Games/Downloads/download ({}).png".format(items)
        layer = pdb.gimp_file_load_layer(image, location)
        pdb.gimp_image_insert_layer(image, layer, None, -1)

register(
        "python-fu-loadlay",
        "blurb: Here is the first text",
        "help: Here is the help text",
        "author: My name",
        "copyright: My company",
        "date: 2020",
        "/Filters/loadlay",
        "",
        [
        
            
            
            
        ],
        [],
        plugin_loadlay)

main()

It seems I keep plugging holes when I should be fixing the dam, so to speak.  For example, I have the script to export the gif from GIMP in the form of gif.gif.  Instead of figuring out how to use datetime to name the gif (I tried), I'm just moving the gif.gif and then renaming it using python, while waiting until I manually scale and export the talkimg-sized gif.
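Note to future me: the move and the rename could probably collapse into one call, since shutil.move renames when the destination includes the new filename (a sketch, untested):
Code:
import datetime
import shutil

tod = datetime.date.today()
dest_dir = "C:/Users/Games/Desktop/GMIP/2024/2-2024"

# moving to a full path renames at the same time
shutil.move("C:/Users/Games/tmp/gif.gif", f"{dest_dir}/b{tod.month}-{tod.day}.gif")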

Code:
import datetime
from datetime import timedelta
import shutil
import os
from os import rename
import webbrowser

# set dates for link range and newf
tod = datetime.date.today()
tom = tod + timedelta(days = 1)

# create link
def CBuddy():
    link = f"https://jastic.space/search?after_date={tod.year}-0{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-0{tom.day}T07%3A55%3A00"
    return link
print(CBuddy())

# open link
url = CBuddy()
webbrowser.open_new_tab(url)

# name newfolder with date
directory = f"{tod.month}-{tod.day}"
parent_dir = "C:/Users/Games/Downloads"
newf = os.path.join(parent_dir, directory)
os.mkdir(newf)
print("Directory '%s' created" %directory)

# learn to scrape 24 images 1 second at a time,
# else manually download each file, use gimp filter to load all images,
# export big gif press the any key then enter
print("Is big gif exported?")
input()
print("Movin' on...")

#move big gif
src = "C:/Users/Games/tmp/"
dest = "C:/Users/Games/Desktop/GMIP/2024/2-2024/"
shutil.move("C:/Users/Games/tmp/gif.gif", dest)
rename ("C:/Users/Games/Desktop/GMIP/2024/2-2024/gif.gif", f"C:/Users/Games/Desktop/GMIP/2024/2-2024/b{tod.month}-{tod.day}.gif")

#scale image and export little gif
print("Is little gif exported?")
input()
print("Movin' on...")

#move little gif
src = "C:/Users/Games/tmp/"
dest = "C:/Users/Games/Desktop/GMIP/2024/2-2024/"
shutil.move("C:/Users/Games/tmp/gif.gif", dest)
rename ("C:/Users/Games/Desktop/GMIP/2024/2-2024/gif.gif", f"C:/Users/Games/Desktop/GMIP/2024/2-2024/{tod.month}-{tod.day}.gif")


# ID files
src = "C:/Users/Games/Downloads"
dest = "C:/Users/Games/Downloads/{}".format(directory)
files = os.listdir(src)
os.chdir(src)

# i have a dummy file present so new downloads look like download(*).png
# only move numbered png files
for file in files:
    if os.path.isfile(file):
        if file.endswith(").png"):
            shutil.move(file, dest)  

print("Files moved.")

# open gimp, export two gifs
# upload to two sites, gather links
# post to btctalk

Next steps:  Change my GIMP scripts to run unattended, and learn to write a batch command to eventually string all these steps together.
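From what I've read, python itself could also chain the stages instead of a batch file; a sketch, with made-up script names until the pieces actually exist:
Code:
import subprocess

# run each stage in order; check=True stops the chain if a stage fails
for script in ["make_link.py", "make_gifs.py", "post_gifs.py"]:
    subprocess.run(["python", script], check=True)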

Hope!

sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Big progress!

Here's the code (being a single-digit month and day) I've sewn together so far.  I read what looked like a great tip: write your comments first, saying what you want that part of the code to do.  And while I could explain pretty well what each section is doing, there is no way I could come up with the code off the top of my head just by looking at my comments; some of it, like setting variables... well, that's about the only thing.  Well, I can run an if statement if I remember the colon after it, but it will do no more than print something.

Code:
from datetime import timedelta
import datetime
import shutil
import os
import webbrowser

#set dates for link range and newf
tod = datetime.date.today()
tom = tod + timedelta(days = 1)

#create link
def CBuddy():
    link = f"https://ninjastic.space/search?after_date={tod.year}-0{tod.month}-0{tod.day}T07%3A55%3A00&author=ChartBuddy&before_date={tod.year}-0{tod.month}-0{tom.day}T07%3A55%3A00"
    return link
print(CBuddy())

#open link
url = CBuddy()
webbrowser.open_new_tab(url)

#name newfolder with date
directory = f"{tod.month}-{tod.day}"
parent_dir = "C:/Users/Games/Downloads"
newf = os.path.join(parent_dir, directory)
os.mkdir(newf)
print("Directory '%s' created" %directory)

#learn to scrape 24 images 1 second at a time,
#else manually download each file, press the any key then enter
print("Are files downloaded?")
input()
print("Movin' on...")

#ID files
src = "C:/Users/Games/Downloads"
dest = "C:/Users/Games/Downloads/{}".format(directory)
files = os.listdir(src)
os.chdir(src)

#i have a dummy file present so new downloads look like download(*).png
#only move numbered png files
for file in files:
    if os.path.isfile(file):
        if file.endswith(").png"):
            shutil.move(file, dest)  

print("Files moved.")

#open gimp, export two gifs
#upload to two sites, gather links
#post to btctalk

Changelog:
1.Download  The ninjastic space link now auto opens

2.Import Saving me just a few seconds, but it should be helpful if I fully automate the process: I wrote a GIMP plugin to load the base image and then load each layer in order automatically, so I only have to open GIMP and click this script.  Before, I would drag the base-layer desktop icon onto the GIMP icon to open it, then highlight the day's downloads, drag them in, and run a plug-in I found to reverse all the layers.  Here's that code; it places the next hour after in the gif, or before in the layers, by giving the new layer a position of -1.
Code:
import sys, os, re, traceback
from collections import namedtuple
from gimpfu import *
from pdb import *

def plugin_loadlay(image, drawable):
    
    image = pdb.gimp_file_load("C:/Users/Games/Downloads/download (0).png", "/download (0).png")
    display = pdb.gimp_display_new(image)

    for items in range(1,25):
        location = r"C:/Users/Games/Downloads/2-3/download ({}).png".format(items)
        layer = pdb.gimp_file_load_layer(image, location)
        pdb.gimp_image_insert_layer(image, layer, None, -1)

register(
        "python-fu-loadlay",
        "blurb: Here is the first text",
        "help: Here is the help text",
        "author: My name",
        "copyright: My company",
        "date: 2020",
        "/Filters/loadlay",
        "",
        [
        
            
            
            
        ],
        [],
        plugin_loadlay)

I kept using f to modify the image-number string like I did with the date, but I have so much more to learn about how to treat strings and variables.  And I think a string is a type of variable along with integers, Booleans, and complex numbers, where Python uses j for the imaginary unit; I think I remember reading that.  Well, the last time I typed this much while coding, I thought I had deleted my whole download folder, because it suddenly became inaccessible, but I had really only moved it to the root drive, which took about 12 hours.  Don't test on your actual data, people!  Close one!  Grin
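For the record, python will happily report what type something is, and complex numbers really do use j:
Code:
print(type("24"), type(24), type(True), type(3 + 4j))
# <class 'str'> <class 'int'> <class 'bool'> <class 'complex'>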

Next steps:  Change my GIMP scripts to run unattended, and learn to write a batch command to eventually string all these steps together.  Cheers!

EDIT: missing punctuation
sr. member
Activity: 114
Merit: 93
Fly free sweet Mango.
Backstory:  In the Wall Observation Thread there is a great bot, ChartBuddy, coded by Richy_T that posts the bitcoin bid and ask walls from the bitstamp exchange, on an hourly basis.  A few months ago, a WO regular, OgNasty, asked "Has anybody created some sort of animated gif using the latest chart buddy images? It might be funny to use the last 24 images on a rolling basis to give you a feel for how the last day has gone. Or maybe grab 24 images once a day and then be able to show each day’s movement. Seems like something one of you fellas would enjoy creating."

Having previously made animated GIF files from image layers in GIMP, I knew I was up for at least the second request.
Here are the steps I iterated each day at the beginning.

1. Scroll through the day and download each ChartBuddy post.  
2. Drag each downloaded image into GIMP as a new layer
3. Export full size gif to imgur for the clickable link, and export optimized gif for the in-thread talkimg hosted one
4. Put together the post and post
5. Archive images for later use in a monthly replay

Even though it was only taking about 5-10 minutes, after a few days I desired to automate, or at least streamline, some of those steps.  I've done some 'fake' programming in games like TIS-100, SpaceChem, and Human Resource Machine, and I do have some experience with BASIC and LOGO from back in the day, but now I wanted to learn a little Python.  The first process I wanted to automate was the downloading of the previous day's ChartBuddy images.  Off to YouTube to do some research!  It turns out to be one of the last things it seems I'm going to be able to do.  These dang electric winged minions keep bringing back bowls of soup when I want downloads.  More on that later.

The first streamline I learned rather quickly was that I could drag all 24 images into GIMP at once, and they would stay in order.  That was a nice one to find first.  Also helpful is how Windows appends numbers to identically named files as you download them.  Then I figured out that if I have a 'dummy' download.png file present, the downloads would be perfectly numbered from 1 to 24.  Sweet.

I'm posting this to possibly provide a chuckle at my mistakes, and maybe even encourage other people to learn how to code.  I've had some frustration, but it's been a ton of fun along the way.  It has to be fun if they make fun games about it, right?   Full disclosure: I have typed in snippets of a lot of other people's code, never copy-and-pasted anyone's code, and generally done a bunch of problem solving.  Along the way I've started to learn a little about Python.  The little game I've been playing is that I get the most points for finding the answer by searching from the Brave browser, medium points for going to Google, and the least points for asking Copilot.  What an amazing resource that Copilot is.  Here's my current status, mostly chronologically marking my progress.  All the while, I keep giving BeautifulSoup another go.  The furthest I've gotten on that front is to have it return all the links on a thread page, where I can see ChartBuddy's user id number, but I can't figure out how to sift through it with code, yet.
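For any other beginner following along, this is roughly the shape of what I keep attempting with BeautifulSoup (the profile id in the check is a made-up placeholder, not ChartBuddy's real one):
Code:
import requests
from bs4 import BeautifulSoup

# fetch one thread page and keep only the links pointing at a given profile
page = requests.get("https://bitcointalk.org/index.php?topic=178336.0")
soup = BeautifulSoup(page.text, "html.parser")

for a in soup.find_all("a", href=True):
    if "profile;u=123456" in a["href"]:  # hypothetical user id
        print(a["href"])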

1.Download  I've discovered Ninjastic Space and how I can get all of ChartBuddy's posts from the last 24 hours on one page, and more easily right-click save them.  I made sure it was okay with TryNinja, the site operator, to do so.  
   Then, knowing that the search parameters were in the ninjastic space website address for the returned results page, I could just make the link myself each day by editing a few numbers in notepad:
Code:
https://ninjastic.space/search?after_date=2024-01-27T07%3A55%3A00&author=ChartBuddy&before_date=2024-01-28T07%3A55%3A00
  See the user, date and time in there?
   My next progress is where I first used python: making the link for me for that day.  An example of a problem that I still need to work on: the ninjastic space website puts zeros in front of single-digit days and months in the search result's url, which isn't how I get them from datetime in python.  So I've been manually putting the zero in the code when needed.  I know there are a bunch of ways to format the time, but I'm looking to use this problem as an 'if else' learning exercise for now.
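For comparison with the 'if else' route, datetime also has strftime, which zero-pads on its own; one to come back to:
Code:
import datetime

tod = datetime.date.today()
# %m and %d always come out as two digits, e.g. 2024-02-03
print(tod.strftime("%Y-%m-%d"))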

5.Archive I learned how to have python make a folder named after the date and move all of that day's images into it; no more download(25).pngs for me after forgetting to move yesterday's images.  I strung together the link-making code and the folder-making code, and put an input in the middle so it waits for me to make the gifs before archiving the layers.   What a world!

4.Post  Before I knew about ninjastic space, to speed up inputting the actual post, I would go to the first image for that day's gif in the WO thread and duplicate that tab.  I would use one tab to reply to my last gif, delete the quote code, and just copy in the new urls, after using the other tab to scroll through the day downloading each ChartBuddy post as I went.  I currently use the draft feature to get the post, but now I see how I can have python write the post for me.  I wonder if it can put the post directly on the clipboard for me?

At this point, the process takes about 5-10 minutes, but let's keep going.

3.Export  This one took a while: I spent a few days trying to wrap my head around GIMP plugins to save a gif.  For this one I had to go to Microsoft Copilot, and it calmly told me that the plugin text from the GIMP python console was wrong about file_gif_save2, and it gave me the correct, newer parameters to use.  Incredible, what a time to be alive!

Next goals:  Auto importing the layers into GIMP, especially for the monthly version, and having python write the post, accepting the links as inputs.  Let me know if you have any questions I might be able to answer.  Smiley

EDIT: added details about search parameters being in the webaddress, grammar, typo, changed a few to 5-10