updating to just python and venv

master
Steven Saus 5 years ago
parent 11779ebc0d
commit 398bb2e794
.gitignore (176)
README.md (25)
README2.md (137)
parse3.sh (270)
python_needs.txt (2)
rss_social.ini (21)
rss_social.rc (8)
rss_social_feeds.rc (6)
run_elinks.py (82)
send.sh (85)
send2.py (2)
urlencode.sh (21)

.gitignore (vendored, 176)

@ -0,0 +1,176 @@
# reference, old shit
1_reference/
# macOS Junks
.DS_Store
# VSCode Junks
.vscode
.vscode/*
# Microsoft Office Junks
~$*.*
# C/C++ Junks
# Prerequisites
*.d
# Compiled Object files
*.slo
*.lo
*.o
*.obj
# Precompiled Headers
*.gch
*.pch
# Compiled Dynamic libraries
*.so
*.dylib
*.dll
# Fortran module files
*.mod
*.smod
# Compiled Static libraries
*.lai
*.la
*.a
*.lib
# Executables
*.exe
*.out
*.app
# Python Junks
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
share/
bin/
bin/*
include/
include/*
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don’t work, or not
# install all needed dependencies.
Pipfile.lock
# celery beat schedule file
celerybeat-schedule
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/

@ -13,31 +13,31 @@ and [shaarli](https://github.com/shaarli/Shaarli) instances. Will
probably work fine with other well-formed RSS/Atom feeds. *Probably.*
* TODO - to make twitter thing not have API keys in open
* TODO - work with less standardized RSS, duh.
* TODO - Have the tags from shaarli be used for content warning text
* TODO - check global cw
* TODO - add tags from feeds instead of just having them be CW
* TODO - Actually get the FB and G+ posting working automatically
* TODO - test the sending
# Requirements
* [xml2](http://manpages.ubuntu.com/manpages/xenial/man1/2csv.1.html)
* python3
* [toot](https://github.com/ihabunek/toot/)
* [twython](https://github.com/ryanmcgrath/twython)
* [twython-tools](https://github.com/adversary-org/twython-tools) - **IMPORTANT- SEE BELOW**
* [pexpect](https://github.com/pexpect/pexpect)
AND (these are probably already installed or easily available from your package manager/distro)
* [uuidgen](https://www.systutorials.com/docs/linux/man/1-uuidgen/)
* [wget](https://www.gnu.org/software/wget/)
* [awk](http://www.gnu.org/software/gawk/manual/gawk.html)
* [grep](http://en.wikipedia.org/wiki/Grep)
* [elinks](http://elinks.or.cz/)
# Installation
Install `xml2` from your package manager, `toot` and `twython` via pip3. Use
of a virtual environment is encouraged for `toot`, at least, but is not required.
Install `toot`, `twython`, and `wget`, plus the following Python modules via pip3:
* configparser
* feedparser
* json
* bs4
* pprint
Use of a virtual environment is encouraged for `toot`, at least, but is not required.
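Since this commit moves everything to plain python and venv, the setup can be sketched as below. The venv path is an assumption for illustration, not something this repo mandates; note that `json` and `pprint` ship with Python 3, so only the third-party packages need installing.

```shell
# Sketch: one venv for the whole toolchain. The path is an assumption.
VENV="$HOME/.venvs/rss_social"
python3 -m venv "$VENV"
# Call the venv's pip directly rather than sourcing activate in a script.
# (json and pprint are in the standard library and need no install.)
"$VENV/bin/pip" install --quiet toot twython feedparser beautifulsoup4 configparser || echo "pip install needs network access"
```

Cron jobs can then invoke the venv's executables explicitly (e.g. `$VENV/bin/toot`) without activating anything.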
## Twython-tools (sort of)
@ -55,7 +55,6 @@ program `tweet-full.py` into `tweet.py`.
1.) First of all you have to copy the checkmailrc to ~/.checkmailrc and edit it. Don't forget to change its
permissions to 600.
If you wish to see the difference, `tweet.patch` is included for you to verify
my changes to the code.
@ -81,7 +80,7 @@ By default, elinks saves cookies, so you should be good there.
## Configuration Files
These configuration files are expected to be in `$HOME/.config`.
This configuration file is expected to be in `$HOME/.config/rss_social`.
* `rss_social.rc`

@ -1,137 +0,0 @@
rss-to-toot
==================================
To take a list of RSS feeds and to send them to Mastodon, Twitter, Facebook,
and Google Plus with images, content warnings, and sensitive image links
when those are available.
Intended for *single user* use, as personal logins, cookies, and API keys
are required.
Currently works well with feeds from [dlvr.it](https://dlvrit.com/)
and [shaarli](https://github.com/shaarli/Shaarli) instances. Will
probably work fine with other well-formed RSS/Atom feeds. *Probably.*
* TODO - to make twitter thing not have API keys in open
* TODO - check global cw
* TODO - add tags from feeds instead of just having them be CW
* TODO - Actually get the FB and G+ posting working automatically
# Requirements
* python3
* [toot](https://github.com/ihabunek/toot/)
* [twython](https://github.com/ryanmcgrath/twython)
* [twython-tools](https://github.com/adversary-org/twython-tools) - **IMPORTANT- SEE BELOW**
AND (these are probably already installed or easily available from your package manager/distro)
* [uuidgen](https://www.systutorials.com/docs/linux/man/1-uuidgen/)
# Installation
Install `toot`, `twython`, and `wget`, plus the following Python modules via pip3:
* configparser
* feedparser
* json
* bs4
* pprint
Use of a virtual environment is encouraged for `toot`, at least, but is not required.
## Twython-tools (sort of)
In this archive are two files - `tweet.py` and `tweet.patch` - that require a
little explanation. I did not need the full functionality of twython-tools,
and in fact, had a bit of a problem getting the gpg encoding of my app keys
to work. Further, the functionality I *did* want, that is posting an
image to Twitter, was always *interactive* when I wanted to enter the
file on the command line.
So I (thank you Apache2 license) ripped out the authentication portions and
hardcoded them, ripped out all the interactive bits, and remade the Twython-tools
program `tweet-full.py` into `tweet.py`.
1.) First of all you have to copy the checkmailrc to ~/.checkmailrc and edit it. Don't forget to change its
permissions to 600.
If you wish to see the difference, `tweet.patch` is included for you to verify
my changes to the code.
You must register a [Twitter application](https://apps.twitter.com) and get
**user** API codes and type them manually into `tweet.py`.
Usage is `tweet.py --message "Thing to tweet" --file /path/to/file/to/tweet`.
* Posting to Facebook, Google Plus, etc.
## IMPORTANT NOTE:
These do not work well (if at all) yet; you might want to put `FALSE`
in the appropriate sections of the configuration.
Due to crappy API restrictions, there isn't a real programmatic way to
post to these services. However, this is *linux* so, dammit, there is.
We are going to get around this by the use of `elinks` and the `pexpect`
python library.
Prior to use, you will need to use elinks and to log in to Facebook
(use [the mobile site](http://m.facebook.com) ) and to [the main Google page](http://www.google.com).
By default, elinks saves cookies, so you should be good there.
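The sharer-URL trick described above can be sketched without actually invoking elinks. This is a self-contained illustration: the `urlencode` function here is a simplified bash-builtin stand-in for the repo's `urlencode.sh`, and the sharer.php URL matches the one built in `parse3.sh`.

```shell
# Percent-encode a permalink, then build the elinks share command.
# This urlencode is a simplified stand-in for the repo's urlencode.sh
# (ASCII only; the real script handles arbitrary bytes via xxd).
urlencode() {
    local length="${#1}" i c
    for (( i = 0; i < length; i++ )); do
        c="${1:i:1}"
        case $c in
            [a-zA-Z0-9.~_-]) printf '%s' "$c" ;;
            *) printf '%%%02X' "'$c" ;;   # "'c" yields the character's code point
        esac
    done
}
EncodedUrl=$(urlencode "https://example.com/post?id=42")
echo "elinks -auto-submit https://www.facebook.com/sharer/sharer.php?u=$EncodedUrl"
```

Because the target URL rides inside a query parameter, it must be percent-encoded or sharer.php truncates it at the first `&` or `?`.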
## Configuration Files
This configuration file is expected to be in `$HOME/.config/rss_social`.
* `rss_social.rc`
Each line has a single value, without any key. The order is important. Sane
defaults are in the example file.
* The location of the executable for `toot`
* The location of the executable for `tweet.py`
* The location of `elinks` (for posting to FB, see below)
* The location of `elinks` (for posting to G+, see below)
* The location of the list of feeds
* The location for the cache of urls
* How many articles to send per run
* The location of the URL encoder. One is included for convenience
If you do not wish to post to any service, put a FALSE on the appropriate
line in the configuration file.
* `rss_social_feeds.rc`
This file is simply parsed from the top to the bottom. Only three line beginnings
matter here:
* @CON=Content Warning Text
* @SEN
* @FEED=http://link.to.feed
The first two are entirely optional (and separately toggleable, as seen in the
example). If @SEN exists, all images processed from that feed will have the
"sensitive" tag on Mastodon. If @CON exists, everything after the equals sign
will be the content warning descriptor for Mastodon. (See the example file).
Finally, any line starting with @FEED should be an ATOM/RSS feed which will be
processed.
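A minimal standalone sketch of those three directives (the real parsing lives in `pull_feeds` in `parse3.sh`; this just echoes what would be dispatched for each feed):

```shell
# Echo one line per @FEED, with whatever @CON/@SEN were set above it.
# @CON and @SEN reset after each @FEED, matching parse3.sh's behavior.
parse_feed_list() {
    local line SENSITIVE=0 CONTENTWARNING=""
    while read -r line; do
        case $line in
            @SEN*) SENSITIVE=1 ;;
            @CON=*) CONTENTWARNING="${line#@CON=}" ;;
            @FEED=*)
                echo "feed=${line#@FEED=} cw='$CONTENTWARNING' sensitive=$SENSITIVE"
                SENSITIVE=0; CONTENTWARNING="" ;;
        esac
    done
}
printf '@CON=News\n@FEED=http://link.to.feed\n' | parse_feed_list
# → feed=http://link.to.feed cw='News' sensitive=0
```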
# Usage
* Run `parse3.sh` on a semi-regular basis (e.g. a cron job)
* Run `send.sh` on a semi-regular basis (e.g. a cron job)
* Profit.
Items are sent on a first _published_, first out basis, so slapping as many feeds
into this as you like will still result in the first-published articles
being shared first.
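The two cron jobs above might look like this; the schedule and install paths are assumptions for illustration only:

```
# m   h  dom mon dow  command                      (example schedule, example paths)
*/30  *  *   *   *    $HOME/rss_social/parse3.sh
15    *  *   *   *    $HOME/rss_social/send.sh
```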
# Credits and where a lot of this started
[https://www.linuxjournal.com/content/parsing-rss-news-feed-bash-script]
[https://linux-tips.com/t/expand-shortened-urls-with-gnu-wget/376]
[https://www.hyperborea.org/journal/2017/12/mastodon-ifttt/]
[https://github.com/poga/rss2mastodon]
[https://github.com/blind-coder/rsstootalizer]
[https://twitrss.me/]
[https://gist.github.com/cdown/1163649]

@ -1,270 +0,0 @@
#!/bin/bash
########################################################################
# Init
########################################################################
initialize () {
COMPOSING=0
SENSITIVE=0
CONTENTWARNING=""
if [ -f "$HOME/.config/rss_social.rc" ];then
readarray -t line < "$HOME/.config/rss_social.rc"
TOOTCLI=${line[0]}
TWEETCLI=${line[1]}
FBCLI=${line[2]}
GPLUSCLI=${line[3]}
RSSFEEDS=${line[4]}
CACHEDIR=${line[5]}
SENDNUM=${line[6]}
ENCODER=${line[7]}
else
echo "Configuration file not set up properly."
exit
fi
CACHEFILE="$CACHEDIR/urls"
if [ ! -d "$CACHEDIR" ];then
mkdir -p "$CACHEDIR"
fi
if [ ! -f "$CACHEFILE" ];then
echo "" > "$CACHEFILE"
fi
TEMPDIR="$CACHEDIR/tempfiles"
if [ ! -d "$TEMPDIR" ];then
mkdir -p "$TEMPDIR"
fi
TEMPRSS="$TEMPDIR/temprss.txt"
echo "" > "$TEMPRSS"
}
########################################################################
# Expand all shortened urls
########################################################################
expand() {
resulturl=""
resulturl=$(wget -O- --server-response "$testurl" 2>&1 | grep "^Location" | tail -1 | awk -F ' ' '{print $2}')
if [ -z "$resulturl" ]; then
resulturl="$testurl"
fi
}
########################################################################
# Get the image from the RSS feed
########################################################################
getimg() {
if ! wget -qO "$TEMPIMG" "$url"; then
#error getting image
TEMPIMG=""
fi
}
########################################################################
# Post each item to respective services
########################################################################
postit() {
if grep -Fxq "$PERMLINK" "$CACHEFILE"
then
#echo "ERROR: $PERMLINK"
echo "§ Already sent: $PERMLINK"
else
echo "§ Setting up posting for $PERMLINK"
echo "$PERMLINK" >> "$CACHEFILE"
# Remember sensitive and CW. CW is a string
#Caching the result and image
uuid=$(uuidgen -r)
bob="$TDSTAMP-$uuid"
ThisPostDir="$CACHEDIR/$bob"
mkdir "$ThisPostDir"
EncodedUrl=$($ENCODER "$PERMLINK")
if [ -z "$TEMPIMG" ];then
# post without image
poststring="$TITLE $PERMLINK"
tweetstring=$(printf " --message \"%s %s\"" "$TITLE" "$PERMLINK")
if [ ! -z "$CONTENTWARNING" ];then
tootstring=$(printf "post --spoiler-text \"%s\" \"%s %s\"" "$CONTENTWARNING" "$TITLE" "$PERMLINK")
else
tootstring=$(printf "post \"%s %s\"" "$TITLE" "$PERMLINK")
fi
#not sure if -remote will work with pexpect
fbstring=$(printf " -c auto-submit -u https://www.facebook.com/sharer/sharer.php?u=%s" "$EncodedUrl")
gplusstring=$(printf " -c auto-submit -u https://plus.google.com/share?url=%s" "$EncodedUrl")
else
# post with image
imgname=$(basename "$TEMPIMG")
cpstring="$TEMPIMG $ThisPostDir"
out=$(eval cp "$cpstring")
#rewriting the variable so I don't have to find it later.
TEMPIMG2="$ThisPostDir/$imgname"
poststring="$TITLE $PERMLINK $TEMPIMG"
tweetstring=$(printf " --message \"%s %s\" --file %s" "$TITLE" "$PERMLINK" "$TEMPIMG2")
if [ ! -z "$CONTENTWARNING" ];then
tootstring=$(printf "post --spoiler-text \"%s\" \"%s %s\" --media %s" "$CONTENTWARNING" "$TITLE" "$PERMLINK" "$TEMPIMG2")
else
tootstring=$(printf "post \"%s %s\" --media %s" "$TITLE" "$PERMLINK" "$TEMPIMG2")
fi
if [ "$SENSITIVE" == 1 ];then
tootstring=$(printf "%s --sensitive" "$tootstring")
fi
fbstring=$(printf " -auto-submit https://www.facebook.com/sharer/sharer.php?u=%s" "$EncodedUrl")
gplusstring=$(printf " -auto-submit https://plus.google.com/share?url=%s" "$EncodedUrl")
fi
# echo "WOULD POST::"
# echo "$tweetstring"
# echo "$tootstring"
# echo "$fbstring"
# echo "$gplusstring"
ThisPostText="$ThisPostDir/posting.txt"
touch "$ThisPostText"
if [ "$TOOTCLI" != "FALSE" ];then
echo "$tootstring" >> "$ThisPostText"
fi
if [ "$TWEETCLI" != "FALSE" ];then
echo "$tweetstring" >> "$ThisPostText"
fi
if [ "$FBCLI" != "FALSE" ];then
echo "$fbstring" >> "$ThisPostText"
fi
if [ "$GPLUSCLI" != "FALSE" ];then
echo "$gplusstring" >> "$ThisPostText"
fi
# Little bit of cleaning up here...
if [ -f "$TEMPIMG" ];then
rm "$TEMPIMG"
fi
sleep 2 #to make sure our dirnames are different
fi
}
########################################################################
# Parse feeds here
########################################################################
parse_feeds (){
while read -r line; do
case $line in
title* )
TITLE=$(echo "$line" | awk -F 'title=' '{print $2}' | awk -F 'http' '{print $1}')
#strip url off title if it is there
COMPOSING=1
;;
updated* )
DateSTAMP=$(echo "$line" | awk -F 'updated=' '{print $2}' | awk -F 'T' '{print $1}')
TimeSTAMP=$(echo "$line" | awk -F 'updated=' '{print $2}' | awk -F 'T' '{print $2}' | awk -F '-' '{print $1}')
TDSTAMP=$(date --date="$DateSTAMP $TimeSTAMP" +"%Y%m%d%H%M%S")
;;
link/@href* )
testurl=$(echo "$line" | awk -F 'link/@href=' '{print $2}')
expand
#strip off any _utm things and/or stupid :large things on the end
url=$(echo "$resulturl" | awk -F '?utm_' '{print $1}' | awk -F ':' '{print $1":"$2 }')
case $url in
*jpg*)
TEMPIMG="$TEMPDIR/temp.jpg"
getimg
;;
*png*)
TEMPIMG="$TEMPDIR/temp.png"
getimg
;;
*gif*)
TEMPIMG="$TEMPDIR/temp.gif"
getimg
;;
*twitter*)
echo "store, maybe useful?"
;;
*)
PERMLINK="$url"
;;
esac
;;
source/link/@href*)
testurl=$(echo "$line" | awk -F 'link/@href=' '{print $2}')
expand
#strip off any _utm things and/or stupid :large things on the end
sourceurl=$(echo "$resulturl" | awk -F '?utm_' '{print $1}' | awk -F ':' '{print $1":"$2 }' | grep -v -e "atom" -e "rss" -e "xml")
#If there's something here and not the permalink (like if it's to a tweet?)...
if [ -z "$PERMLINK" ];then
PERMLINK="$sourceurl"
fi
;;
# content is not parsed for here because it's usually html and way too long
# I can release something later for that, I guess.
/feed/entry* )
# If you're currently putting together something, you've hit the next entry
if [ $COMPOSING == 1 ];then
postit
COMPOSING=0
fi
;;
link/@rel=enclosure*)
# This is probably the end of one.... (also triggers on the last entry)
if [ $COMPOSING == 1 ];then
postit
COMPOSING=0
fi
;;
esac
done < "$TEMPRSS"
postit
COMPOSING=0
}
########################################################################
# Pull in feeds here
########################################################################
pull_feeds () {
SENSITIVE=0
CONTENTWARNING=""
while read -r line; do
case $line in
@SEN*) SENSITIVE=1
;;
# NEED TO CHECK HERE SO THAT IF SOMEONE LEAVES IT OFF...
@CON*)
CONTENTWARNING=$(echo "$line" | awk -F '@CON=' '{print $2}')
;;
@FEED*)
FEED=$(echo "$line" | awk -F '@FEED=' '{print $2}')
curl -s --max-time 10 "$FEED" | xml2 | sed 's|/feed/entry/||' > "$TEMPRSS"
#cat "$TEMPRSS"
#echo "$FEED"
#sleep 10
parse_feeds
rm "$TEMPRSS"
SENSITIVE=0
CONTENTWARNING=""
;;
*) echo "ignoring commented line" ;;
esac
done < "$RSSFEEDS"
}
########################################################################
# Main
########################################################################
initialize
pull_feeds
#Clean

@ -8,5 +8,5 @@ python requirements
appdirs https://pypi.org/project/appdirs/
configparser
beautiful soup
beautifulsoup4
feedparser

@ -7,8 +7,25 @@ ArticlesPerRun = 1
filters = politics blog sex bigot supremacist nazi climate
#leave as no if you do not have a tool for it or don't want to post
#to that site
birdposter = no
mastoposter =
#birdposter = no
#mastoposter =
[Social1]
type = mastodon
rcfile =
ContentWarning = yes
GlobalCW = Bot posted
Sensitive = no
#I don't know about this format; default should be all feeds
Feeds = Feed1 Feed3
[Social2]
type = twitter
rcfile =
#This should be optional to add
#ContentWarning = no
#GlobalCW = no
#Sensitive = no
[Feed1]
url = https://ideatrash.net/feed

@ -1,8 +0,0 @@
/usr/bin/toot
$HOME/bin/tweet.py
/usr/bin/elinks
/usr/bin/elinks
$HOME/.config/rss_social_feeds.rc
$HOME/.cache/rss_social
1
/path/to/urlencode.sh

@ -1,6 +0,0 @@
@FEED=https://ideatrash.net/feed
@CON=News
@FEED=http://feeds2.feedburner.com/time/topstories
@SEN
@CON=NSFW
@FEED=http://www.tinynibbles.com/feed

@ -1,82 +0,0 @@
#!/usr/bin/env python
'''
PEXPECT LICENSE
This license is approved by the OSI and FSF as GPL-compatible.
http://opensource.org/licenses/isc-license.txt
Copyright (c) 2012, Noah Spurrier <noah@noah.org>
PERMISSION TO USE, COPY, MODIFY, AND/OR DISTRIBUTE THIS SOFTWARE FOR ANY
PURPOSE WITH OR WITHOUT FEE IS HEREBY GRANTED, PROVIDED THAT THE ABOVE
COPYRIGHT NOTICE AND THIS PERMISSION NOTICE APPEAR IN ALL COPIES.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
'''
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals
import argparse
import pexpect
import sys
import time
import datetime
KEY_UP = '\x1b[A'
KEY_DOWN = '\x1b[B'
KEY_RIGHT = '\x1b[C'
KEY_LEFT = '\x1b[D'
KEY_ESCAPE = '\x1b'
KEY_BACKSPACE = '\x7f'
KEY_ENTER = '\x1b[13'
parser = argparse.ArgumentParser(add_help=False)
parser.add_argument('-e','--execute', action='store',dest='execute', type=str, nargs='+')
parser.add_argument('-c', '--command', action='store',dest='command', type=str, nargs='+')
parser.add_argument('-u', '--url', action='store', dest='url', type=str)
args = parser.parse_args()
#exe = str(args.execute)
#url = str(args.url)
#command = str(args.command)
#exe = args.execute
url = args.url
#command = args.command
print (url)
# Note that, for Python 3 compatibility reasons, we are using spawnu and
# importing unicode_literals (above). spawnu accepts Unicode input and
# unicode_literals makes all string literals in this script Unicode by default.
#child = pexpect.spawnu(exe + ' -' + command + ' ' + url)
child = pexpect.spawnu('./browser.sh' + ' ' + url)
child.logfile = open("/tmp/mylog", "w")
print ('Waiting for it to load...')
#child.expect ('Warning')
time.sleep(1)
child.send("\r")
child.sendline(KEY_ENTER) # "the requested fragment doesn't exist ... but it did post."
print ('quitting')
child.sendline('q')
child.sendline(KEY_ENTER)
# The rest is not strictly necessary. This just demonstrates a few functions.
# This makes sure the child is dead; although it would be killed when Python exits.
if child.isalive():
    child.sendline('q') # Try to ask the elinks child to exit.
child.close()
# Print the final state of the child. Normally isalive() should be False.
if child.isalive():
    print('Child did not exit gracefully.')
else:
    print('Child exited gracefully.')

@ -1,85 +0,0 @@
#!/bin/bash
########################################################################
# Init
########################################################################
initialize () {
TEMPFILE=$(mktemp)
TEMPDIR=$(mktemp -d)
if [ -f "$HOME/.config/rss_social.rc" ];then
readarray -t line < "$HOME/.config/rss_social.rc"
TOOTCLI=${line[0]}
TWEETCLI=${line[1]}
FBCLI=${line[2]}
GPLUSCLI=${line[3]}
RSSFEEDS=${line[4]}
CACHEDIR=${line[5]}
SENDNUM=${line[6]}
ENCODER=${line[7]}
else
echo "Configuration file not set up properly."
exit
fi
CACHEFILE="$CACHEDIR/urls"
}
# for directories in cachedir
# get the posting.txt file (tempimg, if it exists, will be encoded)
# read the posting.txt file - first line is tweet second tood
# maybe use an array there?
# execute the programs
initialize
NUMSENT=0
# Might have an option for sorted or random later, but for right now...
# Getting the cache dirs in numerical order (e.g. first in first out)
# The first line will actually be the base dir but won't have the
# appropriate file, so it'll skip
find "$CACHEDIR" -maxdepth 1 -type d -exec echo {} \; | sort > "$TEMPFILE"
while read -r d; do
if [ -d "$d" ]; then
if [ -f "$d/posting.txt" ];then
readarray -t line < "$d/posting.txt"
ToToot=${line[0]}
ToTweet=${line[1]}
ToFB=${line[2]}
ToGPlus=${line[3]}
if [ -n "$ToToot" ];then
SocialString="$TOOTCLI $ToToot"
output=$(eval "$SocialString")
echo "$output"
fi
if [ -n "$ToTweet" ];then
SocialString="$TWEETCLI $ToTweet"
output=$(eval "$SocialString")
echo "$output"
fi
if [ -n "$ToFB" ];then
SocialString="$FBCLI $ToFB"
output=$(eval "$SocialString")
echo "$output"
fi
((NUMSENT++))
rm -rf "$d"
else
echo "Not a post file"
fi
if [ "$NUMSENT" -ge "$SENDNUM" ];then
exit
fi
fi
done < "$TEMPFILE"
#Clean
rm -rf "$TEMPDIR"
rm "$TEMPFILE"

@ -56,6 +56,8 @@ config = configparser.ConfigParser()
config.read(ini)
sections=config.sections()
# change this to ini where multiple loops like we do with feeds, dumbass!
mastoposter = config['DEFAULT']['mastoposter']
birdposter = config['DEFAULT']['birdposter']

@ -1,21 +0,0 @@
#!/bin/bash
########################################################################
# Url Encode snippet from https://gist.github.com/cdown/1163649
########################################################################
urlencode() {
# urlencode <string>
local length="${#1}"
for (( i = 0; i < length; i++ )); do
local c="${1:i:1}"
case $c in
[a-zA-Z0-9.~_-]) printf '%s' "$c" ;;
*) eval printf '%s' '$c' | xxd -p -c1 |
while read c; do printf '%%%s' "$c"; done ;;
esac
done
}
urlencode "$1"