I have a Red Hat system in AWS running Spark on top of HDFS. Now I want to access PySpark from my local machine, i.e., interactive Python.
So I installed Spyder-Py2 on my local machine, intending to connect it to the remote AWS machine so that I can work with PySpark interactively from Spyder.
But it fails with the following error:
Unable to connect to IPython kernel-1234.json
What am I missing here?
Note: The server on the AWS VM is still running.
# Start an IPython kernel on the AWS machine. Note that `jupyter console --existing`
# only attaches to a kernel that is already running; to launch a standalone kernel
# (which writes a connection file such as kernel-1234.json), use:
ipython kernel
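# Optional sanity check (assuming a standard Jupyter install): the kernel writes its
# connection file into Jupyter's runtime directory, which you can list with the command
# below. kernel-1234.json in this answer is only a placeholder; use whatever file name
# your kernel actually reports.
ls "$(jupyter --runtime-dir)"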
# Copy the Kernel Connection File to Your Local Machine (modify as needed)
scp -i /path/to/your-aws-pem-key.pem your_user@your_aws_public_ip:/home/your_user/.local/share/jupyter/runtime/kernel-1234.json /local/path/
# Check the kernel connection file. Open kernel-1234.json on your local machine and look at
# the "ip" field: when you connect through the SSH tunnel configured below, leave it as
# 127.0.0.1. Change it to the public IP address of your AWS machine only if you instead
# open the kernel ports directly in the instance's security group.
{
"shell_port": 55615,
"iopub_port": 55616,
"stdin_port": 55617,
"control_port": 55618,
"hb_port": 55619,
"ip": "your_aws_public_ip",
"key": "some_random_key",
"transport": "tcp",
"signature_scheme": "hmac-sha256",
"kernel_name": ""
}
# Configure SSH tunneling. Forward each of the five ports listed in the connection file
# to the AWS machine, and keep this SSH session open while you work:
ssh -i /path/to/your-aws-pem-key.pem -L 55615:localhost:55615 -L 55616:localhost:55616 -L 55617:localhost:55617 -L 55618:localhost:55618 -L 55619:localhost:55619 your_user@your_aws_public_ip
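With the tunnel up, you can quickly check that the kernel is reachable before going back to Spyder. A minimal test, assuming Jupyter is installed on your local machine and the connection file was copied to /local/path/kernel-1234.json (the placeholder path from the scp step above):
# Attach a local Jupyter console to the remote kernel through the SSH tunnel
jupyter console --existing /local/path/kernel-1234.json
If this console comes up, point Spyder at the same connection file using its "Connect to an existing kernel" option (the exact menu location varies by Spyder version).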