36 changes: 36 additions & 0 deletions e2e/e2e-minikube.sh
@@ -0,0 +1,36 @@
#!/bin/bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

### This script can be used to run integration tests locally on minikube.
### Requirements: minikube v0.23+ with the DNS addon enabled, and kubectl configured to point to it.

set -ex

### Basic Validation ###
if [ ! -d "integration-test" ]; then
echo "This script must be invoked from the top-level directory of the integration-tests repository"
usage
exit 1
fi

# Set up config.
master=$(kubectl cluster-info | head -n 1 | grep -oE "https?://[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}(:[0-9]+)?")
repo="https://github.com/apache/spark"
image_repo=test

# Run tests in minikube mode.
./e2e/runner.sh -m $master -r $repo -i $image_repo -d minikube
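A minimal local run of this script, assuming minikube is already up with the DNS addon enabled and kubectl is pointed at it (the minikube status check is only a sanity step, not part of the script):

    # run from the top-level directory of the integration-tests repository
    minikube status        # confirm the cluster is running and kubectl targets it
    ./e2e/e2e-minikube.sh  # builds Spark, builds images against minikube, runs the test suite
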
39 changes: 39 additions & 0 deletions e2e/e2e-prow.sh
@@ -0,0 +1,39 @@
#!/bin/bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

Member:
Add a comment for what this script does and what are the requirements for running the script?

Member Author:
Done

### This script is used by Kubernetes Test Infrastructure to run integration tests.
### See documentation at https://github.com/kubernetes/test-infra/tree/master/prow
### To run the integration tests yourself, use e2e/runner.sh.

set -ex

# Install basic dependencies
echo "deb http://http.debian.net/debian jessie-backports main" >> /etc/apt/sources.list
apt-get update && apt-get install -y curl wget git tar
Member:
Hmm. So this works only for debian? Maybe mention this as a requirement at the file level comment?

Member Author:
Mentioned above - this is specific to the k8s testing environment used by the Kubernetes project. It's not intended to be used elsewhere.

apt-get install -t jessie-backports -y openjdk-8-jdk

# Set up config.
master=$(kubectl cluster-info | head -n 1 | grep -oE "https?://[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}(:[0-9]+)?")
repo="https://github.com/apache/spark"

# Special GCP project for publishing docker images built by test.
image_repo="gcr.io/spark-testing-191023"
cd "$(dirname "$0")"/../
./e2e/runner.sh -m $master -r $repo -i $image_repo -d cloud

# Copy out the junit xml files for consumption by k8s test-infra.
ls -1 ./integration-test/target/surefire-reports/*.xml | cat -n | while read n f; do cp "$f" "/workspace/_artifacts/junit_0$n.xml"; done
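To make that last step concrete, here is a standalone sketch of the renaming it performs, using hypothetical report names and a scratch directory in place of /workspace/_artifacts:

    mkdir -p /tmp/reports /tmp/_artifacts
    touch /tmp/reports/TEST-Alpha.xml /tmp/reports/TEST-Beta.xml   # stand-ins for surefire reports
    ls -1 /tmp/reports/*.xml | cat -n | while read n f; do
      cp "$f" "/tmp/_artifacts/junit_0$n.xml"                      # -> junit_01.xml, junit_02.xml, ...
    done
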
117 changes: 117 additions & 0 deletions e2e/runner.sh
@@ -0,0 +1,117 @@
#!/bin/bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

usage () {
  echo "Usage:"
  echo "  ./e2e/runner.sh -h                                               Display this help message."
  echo "  ./e2e/runner.sh -m <master-url> -r <spark-repo> -i <image-repo> -d [minikube|cloud]"
  echo "  Note that kubectl must be configured to access the specified <master-url>"
  echo "  and that you must have access to the <image-repo>."
  echo "  The deployment mode is selected with the -d flag."
}

### Basic Validation ###
if [ ! -d "integration-test" ]; then
echo "This script must be invoked from the top-level directory of the integration-tests repository"
usage
exit 1
fi

### Set sensible defaults ###
REPO="https://github.com/apache/spark"
IMAGE_REPO="docker.io/kubespark"
DEPLOY_MODE="minikube"

### Parse options ###
while getopts "hm:r:i:d:" option
do
  case "${option}" in
    h)
      usage
      exit 0
      ;;
    m) MASTER=${OPTARG};;
    r) REPO=${OPTARG};;
    i) IMAGE_REPO=${OPTARG};;
    d) DEPLOY_MODE=${OPTARG};;
    \?)
      echo "Invalid Option: -$OPTARG" 1>&2
      exit 1
      ;;
  esac
done

### Ensure cluster is set.
if [ -z "$MASTER" ]
then
echo "Missing master-url (-m) argument."
echo ""
usage
exit
fi

### Ensure deployment mode is minikube/cloud.
if [[ $DEPLOY_MODE != minikube && $DEPLOY_MODE != cloud ]];
then
  echo "Invalid deployment mode $DEPLOY_MODE"
  usage
  exit 1
fi

echo "Running tests on cluster $MASTER against $REPO."
echo "Spark images will be created in $IMAGE_REPO"

set -ex
root=$(pwd)

# Clone the Spark repository if needed, otherwise update it.
if [ -d "spark" ];
then
  (cd spark && git pull);
else
  git clone $REPO;
fi

cd spark && ./dev/make-distribution.sh --tgz -Phadoop-2.7 -Pkubernetes -DskipTests
tag=$(git rev-parse HEAD | cut -c -6)
echo "Spark distribution built at SHA $tag"

if [[ $DEPLOY_MODE == cloud ]] ;
then
  cd dist && ./sbin/build-push-docker-images.sh -r $IMAGE_REPO -t $tag build
  if [[ $IMAGE_REPO == gcr.io* ]] ;
  then
    gcloud docker -- push $IMAGE_REPO/spark-driver:$tag && \
    gcloud docker -- push $IMAGE_REPO/spark-executor:$tag && \
    gcloud docker -- push $IMAGE_REPO/spark-init:$tag
  else
    ./sbin/build-push-docker-images.sh -r $IMAGE_REPO -t $tag push
  fi
else
  # -m option for minikube.
  cd dist && ./sbin/build-push-docker-images.sh -m -r $IMAGE_REPO -t $tag build
Member:
Hmm. Where are this build dir and the build script? What if they don't exist?

Member Author (@foxish, Jan 5, 2018):
build and push are arguments that the script takes.

Member Author:
They need not be valid directories. The build script is part of the upstream spark distribution, and exists on our fork as well.

Member:
I see. This is spark/dist dir inside a spark repo.
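A sketch of the flow being discussed, run from the root of the freshly built Spark checkout, with the default image repo used by this script and a hypothetical six-character tag:

    cd dist
    ./sbin/build-push-docker-images.sh -r docker.io/kubespark -t 1a2b3c build   # build the spark-driver, spark-executor and spark-init images
    ./sbin/build-push-docker-images.sh -r docker.io/kubespark -t 1a2b3c push    # push them to the image repo
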

fi

cd $root/integration-test
$root/spark/build/mvn clean -Ddownload.plugin.skip=true integration-test \
  -Dspark-distro-tgz=$root/spark/*.tgz \
  -DextraScalaTestArgs="-Dspark.kubernetes.test.master=k8s://$MASTER \
    -Dspark.docker.test.driverImage=$IMAGE_REPO/spark-driver:$tag \
    -Dspark.docker.test.executorImage=$IMAGE_REPO/spark-executor:$tag" || :

echo "TEST SUITE FINISHED"