Geekery – Creeva's World 3.0 – https://creeva.com – My life unfolding and being told online - 1 byte of information at a time

How Did We End Up Here? – https://creeva.com/index.php/2023/02/10/how-did-we-end-up-here/ – Fri, 10 Feb 2023

A few months ago my website ran into an issue. WordPress crashed, and while I could take time to fix it – I opted not to. At first, it was “I’ll get around to it”. Then it became “do I need it”? For quite a while I wanted to take my site to a static deployment – removing the backend needs. This really was the opportunity to start on that. The problem was old data. I wanted that back.

I previously designed a comprehensive backup strategy. In retrospect, it had some gaping holes in ease of use. Digging through MySQL exports, building new development environments, and tweaking settings because the database dump was larger than the upload limit eventually gave me access once again. It was time to tackle another problem – blog rot.
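
For anyone restoring the same way, the command-line client sidesteps the upload-size limits entirely. A minimal sketch, assuming a dump file named backup.sql and a database named wordpress (placeholder names, not my actual setup):

# Create an empty database and import the dump directly,
# avoiding the web-based upload limit entirely.
mysql -u root -p -e "CREATE DATABASE wordpress"
mysql -u root -p wordpress < backup.sql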

Blog rot is one of those things that just comes over time. Embedded YouTube videos no longer exist. The linked images you used for posts disappeared. Decisions made for optimization cause problems decades later. Blog rot arises from all of these. It was time to do a top-to-bottom clean of the site.

This idea isn't as simple as it sounds. I started with thousands of posts. The website was a central repository of tweets, image uploads, videos, data from sites that don't exist anymore, and general fluff. After two decades it was time to get rid of the litter and noise that I didn't need. The unedited backups are safe, but the future site didn't need that clutter. Posts conceived with original thoughts behind them were the target.

With the first cleaning, the site was whittled down to two thousand posts. The archive still had lots of white noise amongst the soundtrack. Unfortunately, something else turned its ugly eyes upon me: looking through my posts, there were spelling and grammar issues galore. It wasn't unawareness when I wrote them; it just wasn't a concern. I wrote in a stream of thought, and that approach leads to many writing issues for anyone. The downside is that I wanted the writing to be pure rather than edited. I now have a different view on that approach.

I just stared down at the amount of work ahead. The work involved opening each of the two thousand posts and deciding which had a place in the future. The survivors of the culling had to have a working featured image; if an image was missing, stock imagery was used. Each post was scanned by Grammarly, which highlighted spelling, grammar, and punctuation issues, and those issues were addressed or corrected. Writing this post, the Grammarly prompts in the edit screen are almost non-existent compared to the legacy posts.

Revisiting my writing for the first time in years is an interesting experience. I read a wide variety of topics, including family issues, defunct web services, older political views, and general geekery. I remembered each and every post – no matter how random or insignificant, it was remembered. Detachment is beneficial, though. Not everything being read was going to survive. Their ghosts haunting the Internet Archive will keep them preserved.

Future website development was still going to be done with WordPress, which kept things consistent as I was going through. Reviewing, cleaning, fixing grammar, and trimming the blog rot took about a month from beginning to end – from thousands of posts to under five hundred. It was quite the trimming. I left behind a good mix of who I was and what I wrote about over the years. Posts were removed for a multitude of reasons. The largest reason was relevance, specifically relevance to me and not necessarily to the site. I will admit that some of the removals were just due to the level of effort to get a post up to snuff.

During all of this content work, thoughts and plans started emerging for the backend. I've been trying to get away from my hosting provider for years. I keep it for a site that no one visits – why should I be paying actual money to keep it going? Time is money, and time is the larger cost sink anyway. In the end, the timing of everything came together. My domain name was coming up for renewal, which made me decide now was the right time to start digging into DNS and all that entails.

Through all of this, my idea of my hosting needs changed a bit. Originally I thought about migrating to an AWS hosting solution – my personal journaling script had been running there already. Looking at the numbers, though, it was costing me more than my regular host. All I needed was a web directory to host flat files that I could point my DNS at. I ended up testing and migrating to GitHub, where the Pages feature was all I needed. It also has the benefit of a free SSL certificate. It does have a downside: anyone can download the totality of my website more easily than by running a scraper. Thankfully they will only receive the same information a web scraper would get.
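
For reference, pointing a custom domain at GitHub Pages only takes a CNAME file in the repository plus a couple of DNS records. A rough sketch, using example.com and a placeholder username rather than my real settings:

# In the root of the Pages repository, declare the custom domain.
echo "www.example.com" > CNAME
git add CNAME && git commit -m "Add custom domain" && git push

# Then, at the DNS provider (records shown as comments, not commands):
#   CNAME  www  ->  yourusername.github.io
#   A      @    ->  185.199.108.153 (plus the other GitHub Pages IPs)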

The last step of the backend was considering DNS. I ended up using Google and handing more trackable information about myself to the evil overlords. It ended up cutting my domain registration costs in half. Over the next year, I will be migrating my remaining domains over to Google also. For digital items that make no money, but have a cost – cheaper is better. The .com and .net iterations for my domain were expiring at the same time. Historically, I used the .net as a beta mirror for changes. Since I’ll be doing changes offline – that wasn’t really needed anymore. Creeva.net became a profile linking site that is also hosted on GitHub.

The hours invested are going to save me hundreds of dollars year after year going forward. Everything I've been working on has ended up with tangible and feel-good benefits. In fact, this is going to be my first post on the new setup. I have to make sure that the RSS flows I have in place continue to function. I'm also back to the point where, if the let's-write-a-random-post bug hits me, I'll actually be able to publish something.

Yahoo Answers Shutting Down – Not All Data Exportable – https://creeva.com/index.php/2021/04/07/yahoo-answers-shutting-down-not-all-data-exportable/ – Wed, 07 Apr 2021

This week it was announced that Yahoo Answers was shutting down. Honestly, my reaction is that maybe it's time – Reddit has replaced it. The current ownership of Yahoo is not putting money into its properties or updating them, so it is likely time. Though Ars Technica ran an article about the far right claiming this is meant to silence their speech, this falls far more into the category of a straight business decision to turn off the lights on a very lightly used service than anything done with much thought about what happens afterward.

Yahoo was nice enough to provide a method to export your data. Since I'm a data hoarder of content that I have wittingly or unwittingly generated, I wanted to get my data before it was gone. It took me four attempts. The first three times, about an hour after I submitted the request to back up not all of my Yahoo data but just the Answers section, I received an email saying that I had canceled my own request. I assure you I did not. I wasn't even on the page the second or third time (for fear that a page refresh had canceled the request). So that was my first hurdle. But once again, the current owner isn't dumping money into the company to fix things – so you have to expect that some of the rubber bands have snapped and the duct tape is torn by now.

The fourth attempt worked, though not necessarily as advertised. I downloaded a 6.2 KB zipped file and was happy that it wasn't canceled, even though it took 18 hours to generate (lack of money means poorly fed hamsters). I thought the file was a bit small – I can generate a 3 KB text file in my daily journal. But it was compressed, so maybe they spent hours really, really compressing it and making it small. They did not.

When I opened the file, all it contained was some JSON files (which is fine) and privacy notices. The JSON files include solely my basic profile information. They did not include any questions I asked (1) or any I answered (80). You are literally better off, and closer to complete, saving your Yahoo Answers page to a PDF or an image. It's easier to read and gives you a better look in the long run at what the profile would have looked like. Just all-around better.

Now I’m going to go through and manually copy my data out of the service. I have a few weeks and not much data to grab. I just shouldn’t have to. When I submit a data request that offers to export all of my data out of service – I expect all of my data. It doesn’t matter if the data is important or not. Secondary gripe – everything gives a vague time period (such as over a decade ago) and no real dates. Which for a life logging cataloguer is very very annoying and likely would have been included in a proper data export.

Few people reading this will even bother to export their data, and honestly, don't bother. Look at your profile and manually grab what you need. Between the hoops and problems of the export process and the lack of information exported, it's just going to be easier and more complete to do it yourself instead of trusting the starving hamsters to do an accurate job.

Playing With Wordgrinder – https://creeva.com/index.php/2021/03/30/playing-with-wordgrinder/ – Tue, 30 Mar 2021

I have an overall goal of moving one laptop over to being terminal-only one day. However, I can't do that until I can survive more than a day or two in the shell. The goal isn't to "only" have systems that are GUI-free – I have multiple systems. I just want one singular system that is terminal-only and mobile (I already have multiple headless Raspberry Pis in my environment).

So while I can use vi (though really I normally use Nano), I wanted something slightly simpler yet more powerful, if that makes sense. So I was looking at command-line word processors and wistfully remembering the ones I loved. My favorite was a DOS-based one I used around 1993 or 1994 called "Write" – try googling that and finding an accurate result. My next favorite word processor of all time was actually Windows-based: WordPerfect 5.1 for Windows 3.1. The GeoWrite part of GeoWorks (a GUI shell that competed with Windows) was another much-loved one. Other than that it becomes kind of meh; I have no love for other word processing software. Normally I use Notepad or whatever plain text GUI equivalent exists for whatever operating system I'm using. So, back to looking.

As I said, I do quite a bit with Nano in the command line. For a text editor (not a word processor) I still love editing from DOS, but Nano works very well for my needs. Now that the bare minimum has been established, what can I get that's a little more? If you ask around online, everyone gives one main answer – Wordgrinder. Since I was just starting the search, who was I to argue? Sudo apt-get me some Wordgrinder.
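
If you want to follow along, the install really is that simple on a Debian or Ubuntu based system (package names may differ elsewhere):

sudo apt-get update
sudo apt-get install wordgrinder
wordgrinder   # fire it up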

I fired it up once the install finished, and at first I was impressed, but I did have some annoyances. The first is that I wasn't happy with white on black. I decided I wanted to go old school and do bright green text on black, but I couldn't figure out how, so I went on Reddit and asked if anyone knew. Someone was nice enough to answer: they went through the original source code, and there was no option for changing font (or background) colors. What you get is what you get. Not that I'm ungrateful – things might still be great (kinda).

Color choices aside, one thing worth mentioning is that there is a top and bottom ribbon (I disabled that). You also can only bring up the menu by hitting Escape. However, one thing that was psychologically an issue for me was that typing always happens at the middle of the page. You can scroll up or down, but your active line stays in the center. It's my issue, not an application issue – I'm sure I could have gotten over it.

I attempted to do my daily journal entry in it. Writing in Wordgrinder is easy and smooth, and I went back a few times to do later updates before the day ended. Then the problems happened. I had my file named properly with a .txt extension. My script pulled it into my daily activity file – and then there was markup everywhere. Even though I didn't use any special markup in my file, the application saved its own anyway. I'm sure there may be a setting or something I could use to change this behavior, but frankly, for my desires and the fact that I was bowled over otherwise, I'm moving on. For many people this would be a great program, but if you are looking to edit plain text with word processor options, this isn't it – or maybe it's just a flag I didn't set while starting the program. There is absolutely nothing bad I can say about the program other than the lack of color options and the middle-of-the-screen typing. It just didn't fit my workflow and processing needs. Time to find the next experiment.

Sometimes The Right Text Editor is Right In Front of You – https://creeva.com/index.php/2021/03/23/sometimes-the-right-text-editor-is-right-in-front-of-you/ – Tue, 23 Mar 2021

I had an issue with my daily journal script. If I collected all the data from my sources, generated my daily post, and then made notes, I was leaving out too many details. If I kept a separate file with notes and then pasted it in, that didn't help me either. Especially since I'm often away from my computer, I wanted something I could use from my phone. It was important that it could sync through Dropbox (hence usable anywhere). Finally, it should save plain text or a simple markup that I could script the extra markup details out of.

At first, I went to the de facto standard – Evernote. This is where millions of people make daily notes to organize their lives. It also works with IFTTT, which means I can use Dropbox with it. So I spent an hour testing this solution. Unfortunately, Evernote can mark a note as complete before you are done typing, which means IFTTT pulls and processes it before you might be finished. This is an issue because you may lose relevant data once it enters your workflow. After wiggling and jiggling, dancing and prancing – anything to try to make this work – I gave up. I thought about OneNote, but it didn't work with IFTTT, and it likely would generate too much traffic to offload it to a similar service.

To the plain text file it was. I went through over a dozen free applications. I don't want to disparage the paid options – the $9.99 text editor for the iPhone may have done exactly what I wanted – but I didn't want to pay for something when all I needed was the simplest of editors. Some had to be synced manually. Some had a completely intolerable interface. Some advertised functions that weren't actually there. I was fed up – life isn't supposed to be this hard for something with two requirements: plain text and Dropbox.

Well, it turns out that Dropbox has a plain text editor built into it. If I specify my save folder, it even hits my workflows and scripts correctly. I've been using this for the last four days, and you know what? It works great. I wasn't aware there was a text editor built into the phone app before. That is how you test a simple issue a dozen different ways and end up with a simpler solution.
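
To give an idea of how little glue this needs, here is a rough sketch of the pickup step; the paths and file names are made up for illustration and aren't my actual script:

#!/bin/sh
# Hypothetical pickup step: append the phone-written notes to the
# day's journal entry, then empty the note file for tomorrow.
NOTES="$HOME/Dropbox/journal/notes.txt"
ENTRY="$HOME/journal/$(date +%Y-%m-%d).txt"
if [ -s "$NOTES" ]; then
    cat "$NOTES" >> "$ENTRY"
    : > "$NOTES"
fi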

Doing The RSS Feed Cleanup Dance – https://creeva.com/index.php/2021/03/16/doing-the-rss-feed-cleanup-dance/ – Tue, 16 Mar 2021

As I’m going through the daily journal generation script I previously mentioned, I made it to the point that I have most of what I can grab on a daily basis. I’m now working on “maybe one-day” items. So this led to Feedburner.com. You see, Feedburner and I go back a long time. Back when APIs were hard to come by and RSS feeds ruled the world – I routed all my accounts through the Feedburner. It also has the advantage of generating daily emails of RSS items which I still have for historical tracking. I was using it as IFTTT before IFTTT was a thing. It was my central hub which I could manipulate data from.

However, let’s just say it has been a few years since Feedburner and I had any real contact. Saying it has been a decade would likely be about accurate. I showed up sheepishly and looked around, kind of asked how it was going, and just checked in. While on the surface everything seemed to be chugging along as I left it – underneath the still waters was some anger. You see while 10-15 years ago might have been the height of public profiles and web services – many of those did not go on.

I started by clicking the feed information for the most likely dead and buried web services. Instead of pulling up a page, each showed a 404 error. Then I did the same thing for the rest. Feeds that I knew still existed and were good were still working correctly. So while I had the bulk of the work done, I still had to wade through over 100 different feeds – clicking each one and saying "hey, I haven't been to this domain in a decade, what is going on?"
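
In hindsight, this is the kind of wading that could have been scripted. A quick sketch of how the check could be automated, assuming a plain list of feed URLs in a file called feeds.txt:

# Print the HTTP status next to each feed URL; the 404s are the dead ones.
while read -r url; do
    code=$(curl -s -o /dev/null -w "%{http_code}" "$url")
    echo "$code  $url"
done < feeds.txt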

The most amusing to me are the domains that were purchased by other companies. Someday they'll be going through their referral logs and wonder what this is. I must have deleted over 60 defunct and dead feeds. For some of the services it was meh – I only used them to test the service. Other services brought a mental tear to my eye – some good service names and web apps went to dust.

It is now 2021, and instead of RSS feeds being heavily prevalent – back when I used to do massive cross-posting – now it's all APIs and cross-posting is almost dead. It was a different time and a different era. In many ways, the internet has gotten smaller – or maybe just less brave in putting out new ideas. Many things moved to apps, and some of those apps don't even have standard web page front ends anymore. The idea of the open-sharing web is almost completely dead.

Here I am writing on a blog. A personal blog not monetized – just sharing my thoughts away. This site itself is a dinosaur of another era. Someday all of it gets trapped and vanishes into the tar pits.

Scripting…Scripting..Scripting The Night Away… – https://creeva.com/index.php/2021/03/12/scripting-scripting-scripting-the-night-away/ – Fri, 12 Mar 2021

The Start

For those who have known me for a while, one of my obsessive interests is archiving and keeping all the data I generate online. I do this to the best of my ability, since there are obviously some services that you just can't get the data out of on a regular basis (some services do allow you to bulk export your data, and I have tons of that from services in the internet graveyard). So I've always meant to do something with it. I figured, way back when, that I would just script it.

Originally it all went on my blog as a daily dive through my traffic, generating my lifestream. However, once you stop posting regularly it really gets spammy – so I stopped. Behind the scenes, though, the spice kept flowing. Flow it did, into tons of files with new information being generated each day. So about three years ago (well, after I had collected about 14 years' worth of data) I decided to throw it all into a wiki. Sounds great – I have plain text – I can do something with this.

The Plan

Which wiki to choose though? I was comfortable with MediaWiki, but similar to just using WordPress, it locks everything up in a database. Now, I understand a database can be scripted, adjusted, poked, and prodded. I'm just not that great at scripting against databases. I have done it – when I was a consultant I wrote a script that could dump information out of MS-SQL or embedded DB2 databases. I just didn't like all my data being in a single database file. I would rather lose one file than lose them all.

This led me to look for a wiki that used flat files. Optimally the files would be plain text and accessible. After searching, I decided upon DokuWiki. It uses plain text files for its pages and works just like a regular wiki from the end-user perspective (syntax differences aside). I installed it. Then I did nothing with it. I then installed it on the home network again. Then I did nothing with it again. This cycle went on for a while. I saved the pages in hopes of eventually building something new from the ashes.
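
That flat-file design is what makes the later scripting possible: each page is just a .txt file under data/pages/, so creating a page can be as simple as writing a file. A small sketch, with the install path and namespace made up for illustration:

# A DokuWiki page is just a text file in the pages directory.
WIKI=/var/www/dokuwiki
mkdir -p "$WIKI/data/pages/journal"
echo "====== Test Page ======" > "$WIKI/data/pages/journal/test.txt"
# The page then shows up in the wiki as journal:test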

Baby Steps

Honestly, what made DokuWiki stick was starting to do genealogy. I needed a way to capture documents in a better way than regular genealogy software was allowing me to. I started entering data and formatting it, scrubbing it, tweaking it – essentially making it mine. Then one day I looked up and realized I was a regular DokuWiki user. Its regular use was finally ingrained in me. I would check and change things at least weekly. I started migrating some of my documentation over (the kind that can't be as easily scripted – but while we are on the subject, my scripts are in the wiki also).

We even got the whole family involved – well, my wife is hesitant. She's finally seeing the value, but she is going to host her own version on her own Raspberry Pi. My son and I, however, use the home-network instance hosted on a Mac. It even syncs across our machines, so in a pinch we can still use it offline. Maybe that use case will come to pass as we enter the final stretch of the pandemic and finally cross the finish line.

So a year later (I mean, this is a 14-year journey – time is meaningless in the vastness of the internet – it's either milliseconds or an eternity) I think, "Huh, you know what would be a great idea?" I could use the wiki and actually do something with all that old data I have. I could also generate pages from my stuff going forward. As I said this to myself, I also remembered that this was the whole reason I started in the first place. It wasn't meant for manual data transfer over time. It was meant for up-to-the-day scripted and gathered information. Now, how to do it?

The Script

In theory, I knew what I was going to do. I had written a macOS script that parsed data for a customer project before, and I've used that as a starting point for other projects. Just a cat cat here and a sed sed there – here a grep, there a grep, everywhere a grep grep grep. However, the plan is eventually to move this to a Raspberry Pi and have it run on a schedule to grab and sort the data. That means it's time to do it on Linux, because sometimes the quirks between a Linux and Mac script can be painful. Thankfully I have both a Mac and a Linux laptop.
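
To make that concrete, here is the flavor of those pipelines. The input format is a guess at a typical IFTTT-appended log line rather than my real data, so treat it as a sketch:

# Pull yesterday's tweets out of an IFTTT-appended log file and turn
# each matching line into a DokuWiki list item for the journal page.
DAY="March 11, 2021"   # placeholder date string matching the log format
grep "$DAY" ~/Dropbox/ifttt/twitter.txt \
  | sed 's/^/  * /' \
  >> ~/journal/build/twitter-section.txt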

So, time to go into DokuWiki and make a template. I made what I originally considered the end-all-be-all of what I needed in a template. Then I configured the script to generate it. Then, as I was scripting, I changed it at least two dozen times. But I finally generated what I thought was a good copy. I copied the file to the directory on the wiki, and it worked in the browser without issues. I was like Strong Bad pounding away at the keyboard making it go.

Because up-to-the-minute generation of the file would be a pain in the butt, I decided I would use the "Yesterday is Today" approach. The file is generated from yesterday's data, and then I can add my own personal notes to fill in the gaps. The goal, once this is automatic and I'm not tweaking things at night, is to make my notes in the morning. This isn't working well right now because I update the final page in the evening (of the day after it occurred).
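
Getting "yesterday" is also one of those Linux/Mac quirks mentioned earlier – GNU date and the BSD date that ships with macOS spell it differently. A small sketch:

# GNU date (Linux / Raspberry Pi)
YESTERDAY=$(date -d "yesterday" +%Y-%m-%d)
# BSD date (macOS)
YESTERDAY=$(date -v-1d +%Y-%m-%d)
echo "Building the journal page for $YESTERDAY"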

It’s all working so far, and while I rely heavily on files generated by IFTTT, I hope to work on using more API calls to directly generate the information. That script section will be once everything is working though. It’s a bigger project of knowledge – so I’m on a bicycle version of script writing. It’s not a motorcycle, but the training wheels are off as I clean up the spaghetti script regularly.

The Output

Now, my daily journal has too much private data for me to share a screenshot of a page. Until I build test data (which I started yesterday) that I can use to check everything and show how it looks, you'll just have to read the outline.

The top of the page has the journal date, a fancy graphic, and navigation to the journal archive – archive by year, archive of the current month, yesterday, and tomorrow. Then we get to the sections (a rough sketch of the generated skeleton follows the list):

  • Calendar
    • Lists each Google Calendar Event by the hour from the day
  • Notes (this is where I do my daily updates)
  • Blog Posts
  • Social Media
    • Facebook
      • Status Updates
      • Links Shared
      • Image Uploads
      • Images I was tagged in
    • YouTube
      • Uploaded to the main account
      • Uploaded to my Retrometrics account
      • Liked videos
    • Reddit
      • Posts
      • Comments
      • Upvotes
      • Downvotes
      • Saved posts
    • Twitch – lists all streaming sessions
    • Instagram – all images
    • Twitter
      • Posts
      • Links
      • Mentions
  • Travel
    • Foursquare Check-ins
    • Uber Trips
  • Health
    • Fitbit Data (not done yet)
    • Strava Activities
  • Media
    • Goodreads Activity
    • PSN Trophies
    • Beaten Video Game Tracking (Grouvee)
  • Home Automation
    • Music played on Alexa
    • Videos played on Plex
    • Nest Thermostat Changes
  • World Information
    • Weather
    • News Headlines (still working on)
  • To Do
    • Todoist tasks added
    • Todoist tasks deleted
  • Shopping List
    • Shopping List Items Added
    • Shopping List Items Removed
  • Contacts
    • Google Contacts Updates
    • Twitter Followers
  • Errata
    • Just personal information for cross-reference; this is the start of the footer, which is all static information.
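
For a sense of what the script spits out, here is a trimmed-down sketch of the skeleton generation – only a handful of the sections above, with illustrative paths rather than my actual layout:

# Build yesterday's empty page skeleton; the real script then fills
# each section in from the parsed IFTTT files.
DAY=$(date -d "yesterday" +%Y-%m-%d)
PAGE=/var/www/dokuwiki/data/pages/journal/$DAY.txt
{
  echo "====== Journal – $DAY ======"
  echo "===== Calendar ====="
  echo "===== Notes ====="
  echo "===== Blog Posts ====="
  echo "===== Social Media ====="
  echo "==== Twitter ===="
  echo "==== Reddit ===="
  echo "===== Travel ====="
  echo "===== Health ====="
} > "$PAGE"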

That, my friends, is my project for the last week. It also includes sanitizing and normalizing the input data. I’ve been a scripting fool and expect to keep at it for at least one more week. Then, maybe I’ll be able to walk away from it – just fix things when they are broken.

Though for what it’s worth, it inspired me to write and use my blog. I guess that’s something.

Running Windows Programs Via Alexa – https://creeva.com/index.php/2017/07/12/running-windows-programs-via-alexa/ – Wed, 12 Jul 2017

Like half of America, I made a purchase yesterday that threw me into the Amazon ecosystem. Tomorrow I'll be the proud owner of a couple of Amazon Echo devices. One thing that really interested me was the ability to extend the system. With Siri, I managed to use a Raspberry Pi to control my lights via voice, but further extension was doable – but painful. I wasn't positive about what I was going to do with it at first beyond the simple things. So, like any good geek, I started to research.

Since there is an Alexa app built into the Amazon iOS app, I started playing around with a few things. The one thing I can say I'm disappointed with is that I would need to allow internet access to my Plex server if I wanted to control it. That scenario is out, so for the present time streaming stays with iTunes. However, I did get my lights working and controlled via voice. There was a bit of trial and error since I was new to the setup, but first-timers shouldn't have any issues.

This leads up to my love of IFTTT and the fact that I still run tons of desktop utilities. Since many utilities run on remote servers, I normally log in remotely and trigger them manually. I thought it would be great to get a couple of these working with Alexa. There seem to be many options to do this, and many are convoluted – so, while mine is simple to me, it may not be the best for you.

To start with, you need an Alexa device (mine was my iPhone) connected to IFTTT (If This Then That). The next thing you need is a Dropbox account (or any other cloud storage provider supported by IFTTT). What I did was create a new IFTTT applet that runs when I tell Alexa "trigger network backup". This applet creates a file in Dropbox called runme.txt. I created a scheduled task that runs a VBS script; that script exists solely so I can run a batch file silently. You can run the batch file directly – but you will get the annoying cmd pop-up. The batch file called by the scheduled task is as follows:

@echo off
REM Clean up any stale copy of the generated batch file
del c:\scripts\ifttt\ifttt.bat
REM Turn the IFTTT-created Dropbox file into a runnable batch file
type c:\users\me\dropbox\ifttt\runme.txt >c:\scripts\ifttt\ifttt.bat
REM Remove the trigger file so the command only fires once
del c:\users\me\dropbox\ifttt\runme.txt
REM CALL so control returns here and the cleanup line below still runs
call c:\scripts\ifttt\ifttt.bat
del c:\scripts\ifttt\ifttt.bat

In case of reboots or hangs, the generated batch file gets deleted at the start of each run. This runs as a scheduled task that kicks off every 5 minutes, so my commands are delayed, but it requires the bare minimum of system resources and auxiliary software. I may change to a more instant method in the future.

When I set up a specific trigger on IFTTT, the contents of runme.txt are the path to the batch file I want to run. So if I say "trigger network backup", IFTTT matches the applet and creates the runme.txt file with the contents c:\scripts\backup.bat. When this runs, it kicks off a robocopy job that mirrors my servers. I can also make a trigger going the other way that alerts me when a job is done. As I create more scripts or logic to run, I can do this for many other tasks in the future. I'll be able to trigger jobs on different systems by having each machine watch for runme2.txt, runme3.txt, etc. It's simple and efficient. Now, if only I could rename the wake word of the Echo to Jane…

Playstation 4 – Remote Play And You – https://creeva.com/index.php/2017/03/09/playstation-4-remote-play-and-you/ – Thu, 09 Mar 2017

Like Legion, this is another one where I just might be late to the party. I don't think so, though. I think this is something that people are not really aware of. The most important thing about this feature you likely aren't using is that it is free. Free as in beer and not free as in speech. You do have to own the necessary components, but most people reading this site who are in the PlayStation camp likely already have everything they need. So let's go over it to catch everyone up on what it is and why it is useful.

What is Remote Play

Remote Play allows you to stream games from the PlayStation 4 to a compatible device. Don't have a PlayStation 4 to take with you on the road? Well, if you have a laptop and a controller, you are good to go. For more advanced geeks, think of it as Remote Desktop or VNC, with the difference that the streaming is good enough to handle video. The PlayStation Now service currently allows you to stream PS3 games to your PS4 console; this is essentially a private version of that service. Remote Play works across a local network or across the Internet – the limiting factor is only the bandwidth available. I've had it work from a hotel connection back to my home Internet, but your mileage may vary.

What Supports Remote Play

Primarily, Remote Play supports Windows and macOS. The Sony Xperia phones, the PlayStation TV, and the Vita also supported this functionality. I have seen discussion that Vita and PlayStation TV support is being dropped, which is personally annoying since I am one of the few people who own a PlayStation TV device.

Do I Really Need a Playstation 4 Then?

Remote Play connects back to your personal PlayStation 4. The service will not work any other way. Simply put, you are required to own a PlayStation 4.

What Are The Limitations?

The biggest limitation I have found is that this only works in single-user mode. That means if my wife is watching Netflix on the PlayStation 4 in the living room, I can't stream a game to my office. Only a single person can access a PlayStation 4 (locally or remotely) at a time. If you live in a household like mine, where the PlayStation 4 is the main media backend on the TV, you are stuck until it is free. On the plus side, if this is a service you start using all the time, you have an excuse to buy an extra one to stick in a closet or basement attached to your network.

If you feel even more like hacking away at new features, something similar can also be done with the PlayStation 3. The PlayStation 3 supported Remote Play for PSP and Vita devices, and by reverse engineering the protocol, some enterprising developers made a version for PS3 to Windows/Mac clients. You can read more about that here. Have any of you ever actually used Remote Play, or am I the only one?

How To Use Two Sets of Bluetooth Headphones at the Same Time with OSX/MacOs – https://creeva.com/index.php/2017/03/05/how-to-use-two-sets-of-bluetooth-headphones-at-the-same-time-with-osxmacos/ – Sun, 05 Mar 2017

Two nights ago my wife and I were in a hotel sharing a room with my son. The last time this happened, my son stayed up most of the night since my wife and I couldn't fall asleep early. So this time we both brought Bluetooth headphones and were going to watch Netflix. Since my son was on the sofa bed, we could position the laptop so it wouldn't keep him awake. We thought this was going to be an easy exercise, but it was harder than we thought. After 30-40 minutes we had the whole thing working, so I thought I would write it up so no one else has to go through this. Most of the discussions when you search for this in Google are on message boards with multiple links sending you somewhere else. Here are the steps that are required.

  1. Connect all your Bluetooth headphones to your Mac
  2. Go to Applications > Utilities
  3. Open "Audio MIDI Setup"
  4. Click the plus sign at the bottom left and select Multi-Output Device
  5. Rename your Multi-Output Device to something memorable
  6. In the window, select both sets of your Bluetooth headphones (if they have microphones they will show up twice – microphones have 1 channel, and headphones/speakers have 2)

Now that you have everything configured, you can right-click your new multi-output device and select "Use this device for sound output". For the immediate concern, you should be all set. If this is something that is going to see regular use, I recommend a couple of other things. The first is to have your volume control show up in the menu bar. To do this:

  1. Open System Preferences
  2. Click on Sound
  3. Under Output Devices you can see “Show Volume on the Menu Bar” at the bottom – enable this.

On this screen, you can also switch between different outputs, such as the internal speakers and your new multi-output device. However, once you have the volume control in the menu bar, you can switch between outputs quickly – they will show up underneath the volume control when you click the speaker icon. AirPlay and other output devices will show up as selections too. I'm just hoping that for someone else this becomes a 5-minute fix instead of an hour spent clicking different links.

Certified Ethical Hacking Training – Day 1 – https://creeva.com/index.php/2016/10/26/certified-ethical-hacking-training-day-1/ – Wed, 26 Oct 2016

Today I started taking the training courses for the Certified Ethical Hacker certification. Until today, I had been working on reading through the manuals and other books. I wanted to write about it, just to lock some of the training to a point in time for my memory.

The online training is a little kludgy in how they approach things. You have to sign up for four accounts – your video training account, your Transcender account (which doesn't work on the OSX/Chrome combo, so I'll need to troubleshoot that), a site to download your courseware from (I'm still downloading – about 30 GB or more), and the virtual labs account. In theory, I don't need to download all the lab tools, since I have access to their VM portal. However, I figured it was easier to grab them all now instead of hunting for them at a later date. I was also annoyed that the PDFs for the courseware are DRM-locked and require an actual copy of Adobe Acrobat to read. I moved away from Adobe due to the constant patching it requires – but here we are again.

Getting all this stuff together wasn't fun – a constant cycle of signing up and configuring that took up half the day. Since this certification has been around for years, I figured everything would be more streamlined, with ease of use and a focus on open-source software. The open-source side is important since most of the tools they train you on are open-source packages. Requiring closed-source software to read the PDFs is just annoying compared to everything else.

The streaming solution isn't optimal, but so far it works well enough. I didn't notice at first, but when you mark a module complete it auto-starts the next module. That is slightly annoying, especially since they don't warn you. The instructor in the videos is good; he explains things in a well-thought-out manner.

I've made it through three videos – finishing the introduction to the ethical hacking module. In this module, they explained the types of hackers, different types of pen testing, the big hacking incidents of 2014, the steps in a pen test, and international hacking laws. All in all, it was a good day. Most of the items covered I was already aware of, but it was a good refresher.
